/h/ - Hentai

Thread to discuss upscales, mainly with ESRGAN as that is the best tool for the job.
All my upscales can be found here:
(Currently down), use Telegram for now: https://t.me/HentaiUpscales
It shows everything that I've upscaled, whether it contains subs, how many episodes, links to download from, image previews, upscale status and changelog.
All preview images are just run through mozjpeg at quality 90, which is nearly identical to the original. The original PNG preview images are on Telegram.

Torrent version:
(Currently down)
Keep in mind that torrent is an afterthought, downloading is easier with Telegram.

ESRGAN can either be used through my setup which makes it insanely easy for the average user:
(Currently down)
Or it can be downloaded from the official github:
https://github.com/xinntao/Real-ESRGAN
Though using the github version means you'll have to mess with CMD and manually process each episode.
So if you don't want that, use my setup above.
Either can be used for still images too, not just videos. Like if you want to upscale art.
It can also be used for 3D stuff, even heavily compressed twitter garbage: https://imgsli.com/MTE0NDYz
Feel free to ask for more comparison samples.
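For anyone going the github route, the manual CMD work per episode can at least be looped. Here's a sketch; the inference script name and flags are my guess at Real-ESRGAN's video script, so check the repo's README before relying on them:

```python
import subprocess
from pathlib import Path

def build_cmd(episode: Path, model: str, outdir: str) -> list[str]:
    # Command layout is an assumption based on Real-ESRGAN's video
    # inference script; verify the actual flags in the repo's README.
    return ["python", "inference_realesrgan_video.py",
            "-n", model, "-i", str(episode), "-o", outdir]

def upscale_folder(folder: str, model: str = "realesr-animevideov3") -> None:
    # Loop over every episode so you don't have to babysit CMD per file.
    for ep in sorted(Path(folder).glob("*.mkv")):
        subprocess.run(build_cmd(ep, model, "upscaled"), check=True)
```

This is just batching, not the OP's setup; it saves the "manually process each episode" part but nothing else.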

Links+Info to other random helpful tools, including leaked version of latest FlowFrames:
(Currently down)

Most people know me as "Upscale Anon", so if you have any requests or need any help, simply ask for Upscale Anon.
I may not always respond to requests, but I do consider them.
And please don't be a greedy fucker who requests like 5-20 series. It's going to make it less likely that I do your request.

Feel free to talk about other AI stuff or other helpful tools.
If you've got something really good, I might just include it in the tool list.

Previous thread: >>7965266

Check replies to OP for whenever I update the links to everything.
>>
While you guys wait for my new upscale setup, I've tested and recommend giving this a try: https://github.com/the-database/VideoJaNai
It does have some flaws, like not handling DAR or fucked up framerates, and it doesn't let you set the model scale.
But testing it with ESRGAN's AnimeVideoV3 model at 4x, upscaling from 720p to 1440p went as fast as 1.03x speed, around something like 25 fps.
It is insanely fast, even when I was forced to use 4x instead of 2x. It comes with the 2x AnimeJaNai model, which ended up at about 1.84x speed at around 44 fps, but I'm not really fond of how AnimeJaNai looks. It doesn't clean up as well.
The really handy thing is that on top of using TensorRT, it upscales directly into a video, saving you time as it doesn't need to extract frames, upscale, then encode back into a video.
You do however need to convert the ESRGAN pth model with ChaiNNer. Which I've done so you can just use mine: https://files.catbox.moe/82oiik.onnx
Just rename to realesr-animevideov3.onnx if you care about the filename.
In the GUI, I recommend setting the ffmpeg parameters to this:
libx264 -crf 15 -preset slow -pix_fmt yuv420p10le -refs 1 -bsf:v h264_metadata=matrix_coefficients=6 -max_interleave_delta 0
VideoJaNai needs to use .mkv rather than .mp4 and it will mux the subtitles and such into the upscaled video, though one could just remove that manually if one wanted.
>>
>>8169605
Is everything currently down?
>>
>>8169915

i'm down if you're down
>>
can i get a comparison to lanczos?
>>
File: file.png (3.41 MB, 1920x1440)
>>8169628
Did some more testing, even if I use 2 models like Ani4K and then ESRGAN, it's still more than twice as fast compared to using ESRGAN normally and the results appear to be cleaner.
Some samples: https://slow.pics/c/VpyrvD0x
Especially in dark scenes like sample 3 it's very clear that Ani4K+ESRGAN does a better job than ESRGAN by itself.
Either way, I need to do more testing with other shows.
Bible Black will get a big upgrade anyways due to downscaling to 480p rather than 720p, since the source I used was a shitty "remaster" which was just an upscale. I should probably have used the original.
>>8169915
Just the Mega, so spreadsheet and links to torrents are down, but the upscales are still on Telegram.
>>8170320
Lanczos is not an upscaling algorithm; regular scaling algorithms pale in comparison to actual upscaling.
Lanczos, like other ordinary scaling algorithms, will just make whatever blur is in the original video bigger, as well as all the artifacts and the noise.
Here's a sample comparing an upscale to the original video: https://imgsli.com/MTM2ODEw
I don't know what scaling algorithm imgsli uses, but there's not gonna be any significant difference between lanczos and other shit like bicubic, as long as it's not using nearest neighbour, which I don't believe it is.
A few more samples:
https://imgsli.com/MTM2ODE0
https://imgsli.com/MTM2ODIx
https://imgsli.com/MTM2ODI0
https://imgsli.com/MTM2ODI3
https://imgsli.com/MTM2ODM0
https://imgsli.com/MTM2ODQx
Ignore the color change, it's samples from before I fixed them.
Lanczos will be more faithful to the original as it's just stretching the existing pixels. It will not increase clarity, fix noise, reduce compression artifacts or eliminate banding.
>>
>>8171471
>Bible Black will get a big upgrade anyways due to downscaling to 480p rather than 720p
Wait I'm retarded lol, turns out I did downscale it to 480p before. Oh well, still gonna get an upgrade due to using Ani4K+ESRGAN rather than ESRGAN alone.
>>
As a viewer, it seems to me that upscaling currently has difficulty retaining details, especially facial details, in full shots and long shots.

Fortunately, most sex scenes are close-ups and medium shots, and there upscaling gives more than good results.

After all, upscaling is the answer to the lack of high definition in the source material.
>>
>>8172815
Yeah, things that are far away aren't really gonna look great, because the model doesn't really have anything to work with.
It might be better now that I know of the Ani4K+ESRGAN combo, but there will always be some detail that's not 100% faithful to the original.
Though once AI gets a bit further, things might change. But it might introduce new detail that wasn't in the original.
>>
I just joined Telegram for the 1st time. Since I fucking hate it. As soon as I join your chat room/server my phone number gets banned. WTF
>>
>>8173319
Yeah right namefag
>>
File: Hana Dorei.png (2.62 MB, 1920x1440)
>>8171471
Turns out that using VideoJaNai, I don't have to set the color matrix to BT601. Defaulting to BT709 preserves original colors of the video.
It's just my ESRGAN setup that needs to set BT601 due to it extracting frames and then encoding it back into a video.
Gave me quite a bit of headache trying to figure out why it was different.
Though if you know your source has fucked up colors, it might be worth trying to set it to BT601. Either way, the script I gave earlier can alter it afterwards if need be.
Seems like my script applies "limited" color range to the metadata despite the video not being limited, so in case anyone wonders about that, it's completely fine to ignore.
On a side note, Ajisai no Chiru Koro ni will get a big upgrade as I've got a source that doesn't have insane ghosting and is higher quality.
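For anyone wondering what altering the matrix metadata afterwards looks like: the h264_metadata bitstream filter from the ffmpeg parameters earlier in the thread can rewrite the flag without re-encoding. A sketch of my own (not his script), assuming an H.264 stream:

```python
def colorfix_cmd(src: str, dst: str, matrix: int = 6) -> list[str]:
    # matrix_coefficients per the H.264 spec: 6 = BT.601 (SMPTE 170M),
    # 1 = BT.709. Stream-copies the video and rewrites only the metadata.
    return ["ffmpeg", "-i", src, "-c", "copy",
            "-bsf:v", f"h264_metadata=matrix_coefficients={matrix}", dst]
```

Since it's a stream copy, this runs in seconds and is easy to undo by running it again with the other value.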
>>
While processing some episodes, I found out that there are some upscales where subtitles go outside of the monitor (like Energy Kyouka!!).
This is mainly because they are designed for their native resolution.
There are 2 simple fixes:
1: Adjusting the playres
2: Removing any {\q2} {\q1} {\q0} and {\q} filter.
I've gone with the second option, as the first one makes text rather small. The \q tags force text to be split a certain way regardless of screen boundaries, so removing them fixes that.
Things that might have been 2 lines can end up being 3 lines, but I'd rather have that than text outside the screen that you can't read.
I've made a python script that makes this process incredibly easy and can mass process folders recursively as well as individual files: https://mega.nz/folder/QwQlxZhQ#fBfQniiipkBpaflobeK8YQ
I also included a script that checks for any files that might have this problem. It only checks and doesn't alter anything, in case you just want to know.
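The core of option 2 is simple enough to sketch. Something like this regex pass over each line of the .ass file drops the \q overrides while keeping other tags (my own illustration, not the linked script):

```python
import re

def strip_wrap_tags(line: str) -> str:
    # Drop \q wrap-style overrides (\q, \q0..\q3) inside {...} override
    # blocks; other tags in the block are kept, and a block that ends up
    # empty is removed entirely.
    def clean(m: "re.Match[str]") -> str:
        inner = re.sub(r"\\q\d?", "", m.group(1))
        return "{" + inner + "}" if inner else ""
    return re.sub(r"\{([^}]*)\}", clean, line)
```

Run over only the Dialogue lines of the script and the rendered text falls back to the player's normal wrapping at screen boundaries.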
I've also compiled a list of potential shows that are affected:
Aisei Tenshi Love Mary Akusei Jutai The Animation
Ano Danchi no Tsuma-tachi wa... The Animation
Cleavage
Energy Kyouka!!
Kyonyuu JK ga Ojisan Chinpo to Jupo Jupo Iyarashii Sex Shitemasu
Majuu Jouka Shoujo Utea
Master Piece The Animation
Rance 01 Hikari o Motomete The Animation
Saimin Seishidou
Shoujo kara Shoujo e
Suketto Sanjou!! The Animation
Wagaya no Liliana-san The Animation
Youkoso! Sukebe Elf no Mori e

These will be fixed once I upload the new batch and everything, just to be on the safe side.
If you want a fix right now, I've added the fixed subs in the mega, in case you don't want to use my tool or wait for the Telegram update.
>>
>>8169628
just use the vsgan-tensorrt-docker and get even faster speeds if you want to encode
https://github.com/styler00dollar/VSGAN-tensorrt-docker

>>8171471
There is literally no reason to use ESRGAN in 2024. Real-VaselineGAN has been outdone by Compact trained models now and it gives both worse results and speed than them. If you're feeling adventurous, maybe you can even experiment with SPAN. No wonder it's so ass since it's like 5 years old at this point
>>
>>8180441
>No wonder it's so ass since it's like 5 years old at this point
>Real-VaselineGAN has been outdone by Compact trained models now and it gives both worse results and speed than them
Maybe get your shit straight before shitting on something? I'm using AnimeVideoV3, not the regular RealESRGAN model.
AnimeVideoV3 is a compact model, and it was released 2 years and 4 months ago, not 5 years.
AnimeVideoV3 is not ass at all and it's really fast, especially if you use VideoJaNai to use it through TensorRT.
>docker
I'm not messing with that autism shit, especially when VideoJaNai exists.
>>
File: SchizoMoment.png (17 KB, 859x134)
>https://ksy.moe/pandora/hentaisr/
Imagine bragging about making a new model, but then you compare it against AnimeVideoV2, rather than V3.
https://slow.pics/c/6Pfn58mk
It's doubtful they'll make anything good if they're so incompetent that they're using older versions of a model, rather than the current version that performs 100x better.
Then he's even claiming he doesn't care about comparing it like a schizo lol
Imagine claiming they would rather use models they like to compare, rather than actually just doing that. It's not that hard.
>>
File: SchizoLord.png (30 KB, 919x314)
Jesus christ, he really is a schizo lol
>>
>>8177446
hi, sorry for what I'm about to ask, because you'll probably have to repeat yourself. I would like to know if you could explain step by step how you work with VideoJaNai. I followed your previous posts and I understood the principle and the improvements, but I don't know how to apply them.

thank you in advance, if you don't want to there's no problem.
>>
>>8184433 Part 1/2
Depending on whether you want the output file to be mp4, or mkv like the original with all the subs and shit included, you'll need to strip the file beforehand if you want mp4.
This can be done with ffmpeg like this:
ffmpeg -i input.mkv -c copy -sn output.mp4
There are a few niche cases where this won't work, as some really old hentai might use shit like vorbis audio or something, in which case you'll just have to re-encode the audio.
Normally whenever I download stuff, I put things through LosslessCut to discard anything I don't want, leaving only the Japanese audio and English subtitles to save space.
If you don't do this, it's going to make the first audio stream the default shit, which with some releases might be some other language than what you want, hence why you should use LosslessCut first.
I've made a python script for the sake of remuxing videos with the ffmpeg command I gave above, it assumes ffmpeg is local to the script: https://mega.nz/folder/QwQlxZhQ#fBfQniiipkBpaflobeK8YQ
It's the "RemoveSubsAndShit.py" file.
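The remux step can be sketched like this (my own illustration of the same idea, not the actual RemoveSubsAndShit.py; assumes ffmpeg is on PATH):

```python
import subprocess
from pathlib import Path

def remux_cmd(src: Path, dst: Path) -> list[str]:
    # Stream-copy everything except subtitles into an .mp4 container.
    return ["ffmpeg", "-y", "-i", str(src), "-c", "copy", "-sn", str(dst)]

def remux_folder(folder: str) -> None:
    # Iterate over every .mkv in the folder, like the real script does.
    for src in Path(folder).glob("*.mkv"):
        subprocess.run(remux_cmd(src, src.with_suffix(".mp4")), check=True)
```

Stream copy means no quality loss and it finishes in seconds per episode; only the niche vorbis-audio cases mentioned above need an actual re-encode.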
I recommend downloading CheckResolutionAndAR.py and FPSCheck.py as well, it'll make things easier for you.
Make sure you install the dependencies by using the bat file.
You don't need "CSV" or "psutil" that it includes, I just didn't bother removing it, as my upcoming upscale setup will use it.
The sub remover script just iterates over every file in the same folder as the script.
Use resolution and AR check to verify the resolution, that it's normal numbers like 640x480, 960x720, 1280x720 and 1920x1080.
I would not really use VideoJaNai on other resolutions than that. FPSCheck is to verify that the framerate is constant and not fucked in any way.
I have no idea if VideoJaNai can handle variable framerate, but better to be on the safe side.
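The kind of sanity check those two scripts do can be approximated with ffprobe, which ships with ffmpeg (again a sketch of mine, not the linked scripts):

```python
import json
import subprocess

def probe_cmd(path: str) -> list[str]:
    # Ask ffprobe for the first video stream's geometry and frame rate
    # as JSON.
    return ["ffprobe", "-v", "error", "-select_streams", "v:0",
            "-show_entries", "stream=width,height,r_frame_rate",
            "-of", "json", path]

def parse_probe(raw: str) -> tuple[int, int, str]:
    # Returns (width, height, frame rate as a fraction string).
    s = json.loads(raw)["streams"][0]
    return s["width"], s["height"], s["r_frame_rate"]

def check(path: str) -> tuple[int, int, str]:
    out = subprocess.run(probe_cmd(path), capture_output=True,
                         text=True, check=True).stdout
    return parse_probe(out)
```

A clean source should report one of the normal resolutions (640x480, 960x720, 1280x720, 1920x1080) and a sane fraction like 24000/1001; note r_frame_rate alone won't prove the file isn't VFR, so treat it as a first-pass check.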
>>
>>8184433 Part 2/2
After you've remuxed and checked that everything is correct, install VideoJaNai, download the ONNX model I gave earlier: https://files.catbox.moe/82oiik.onnx (Rename to realesr-animevideov3.onnx)
Alternatively, convert it yourself using ChaiNNer.
Then download Ani4K, if you want the best results rather than ESRGAN alone. It will slow things down quite a bit, but it's worth it IMO.
https://openmodeldb.info/models/2x-Ani4K
Ani4K is also a PTH model so you'd have to convert it, but if you don't want to do that, here's that model converted aswell: https://files.catbox.moe/x5ubdg.onnx (Rename to 2x_Ani4Kv2_G6i2_Compact_107500_fp32.onnx)
In VideoJaNai, set it to either single video or batch, depending on what you want.
Set input folder/video to whatever you want and set output folder to wherever you want the upscaled videos to go.
As for filename, if you want mp4, use %filename%.mp4, if you want mkv, then use %filename%.mkv
Ffmpeg settings I recommend this: libx264 -crf 15 -preset slow -pix_fmt yuv420p10le -refs 1 -max_interleave_delta 0
Ignore the built-in presets, as this overrides them.
Model 1 should be the Ani4K 2x model and model 2 should be AnimeVideoV3.
Assuming you're inputting a video that is 720p or higher, set "Resize Height Before Upscale" to 720.
Set model 2 to 720 aswell.
If your input video is something like 640x480, then first model should be 480 and then second model 720.
After that, set "Final Resize Height" to 1440, assuming you want 1440p.
If you have an NVidia GPU, make sure you set "Upscaling Backend" to TensorRT, it's the fastest. Otherwise try DirectML.
"Use Automatic TensorRT Engine Settings" should be set to "Yes"
If you have some videos that are just slightly messed up resolution, I recommend re-encoding them quickly.
Like if a video is 1278x720 instead of 1280, just use ffmpeg like this: ffmpeg -i input.mkv -vf scale=1280:720 -c:a copy -preset ultrafast -crf 0 output.mkv
I think that about covers everything.
>>
>>8184555
>it assumes ffmpeg is local to the script
Fuck, a bit of a mistake there. It assumes ffmpeg is local to the script in the folder "DoNotTouch", similar to any of my ESRGAN setups.
If you don't have my setup or don't want to download it, just edit the code and put ffmpeg next to the script. Applies for all 3 scripts
>>
>>8184559
Thank you for your quick and detailed reply.
So, for example, if my source video is 720x480, I start by re-encoding it in 853x480 and then I apply your method.
>>
>>8184996
With sources that are 720x480 or 853x480, basically 480p at 16:9 aspect ratio, I am not really sure and I just use my own setup instead for that.
With videos encoded in h264, the dimensions must be even numbers, so you'd have to do 854x480, but the output video would then become 2562x1440 rather than 2560x1440, which would trigger my autism lol.
I did a quick test though and I'm not sure if using the Ani4K+AnimeVideoV3 combo is ideal for videos of that resolution.
It seems a little bit better in some areas but a little worse in others. But then again, Ani4K is made to clean up WEB/BD sources and not DVDs.
Not as big of an upgrade over AnimeVideoV3 by itself compared to using it on higher resolution sources, but if you want just a tiny bit of an upgrade at the cost of longer processing time, I guess go for it.
I really hope VideoJaNai gets an update soon where you can set width and not just height, for the input of both models as well as the output. But if you don't care about the extra 2 columns of pixels, I guess just go for 854x480 when re-encoding it.
Alternatively, you could have VideoJaNai output a lossless video by setting CRF to 0 and then re-encode that afterwards with the exact scale you want, if you care about it.
If you do that, you might as well change the speed preset to something like veryfast so less time is wasted (depending on how capable your CPU is), since it's lossless anyways and just a temporary file.
Hopefully all this info didn't leave you more confused than before you asked the question lol.
TL;DR maybe try 854x480 and then apply my method.
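The even-pixel constraint mentioned above is easy to compute up front; a small helper for picking an H.264-safe width (illustration only):

```python
def even_scale(width: int, height: int, target_height: int) -> tuple[int, int]:
    # Keep the aspect ratio, then bump the width up to the nearest even
    # number, since 4:2:0 H.264 needs even dimensions. 854x480 tripled
    # gives 2562x1440 (the case described above); an odd result like
    # 2559 gets rounded up to 2560.
    w = round(width * target_height / height)
    return (w + (w % 2), target_height)
```

Useful for deciding the `scale=` argument before re-encoding, instead of discovering the odd-width rejection mid-encode.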
>>
>>8185256
oh yes ok, I understand, it's true that your script is great for these resolutions. I'll continue to use your script for 720x480 sources and use VideoJaNai for the ones where we have the web version.
>>
Link to prev thread?
>>
>>8186369
Use 4chanx
>>
>>8186369
It's in the OP.
Protip: Always check OP for information, regardless of thread.
>>
>>8186380
Can't click on it

>>8186375

Kys
>>
>>8188082
>Kys
He's right though. 4ChanX would solve your problem of being lazy. It would make it clickable, among many other nice QOL features.
Either do that or just use the same URL on this page and replace the number with the one in the OP.
>>
Did you guys watch any of the new August titles? Bubble de House de OOO was amazing.
>>
>>8188085
Jfk just give me the link you drooling retards. I'm on cancer mobile and shit is gay as fuck. Editing link with old thread number says 404 not found. And I'm not downloading any retarded 4chan app faggots
>>
>>8189028
It's your own fault that you're a phonefag
4ChanX isn't an app, it's a browser extension.
You're the retard here who can't even open the previous thread lol.
>>
Update:
Re-upscale is nearing completion, after which I have to pull all the preview images with my script.
Shouldn't be too long until I'll reupload everything and upload the new batch too.
On a side note, Kuronoe Ookami is working on an absolutely mind blowing model that upscales Poro/A1C hentai and it's just insane.
It straight up adds detail, unlike every other upscaler, which throws away a bit of detail. It even makes the background look really good.
I shouldn't even need to say anything really, the samples speak for themselves:
https://slow.pics/s/wf3sasUA?image-fit=contain
https://slow.pics/s/i8UXyNyL?image-fit=contain
I am REALLY looking forward to whenever he releases this model.
>>
>>8193222
How can this be done on single images, in ChaiNNer for example? An Ani4K screencap upscaled with ESRGAN in ChaiNNer doesn't look near this good.
>>
File: file.png (274 KB, 1139x1046)
>>8193341
With ChaiNNer you would do it like picrel.
If the image is above 1280x720, downscale to 1280x720 first.
Maybe even downscale to 480p if it's 540p or something.
To get the frame you want to upscale, I usually just use ShareX through my MPC-HC video player at native resolution.
Here's the upscaled frames in the workflow, both using Ani4K+ESRGAN as well as ESRGAN by itself: https://slow.pics/c/e76su5V0?canvas-mode=fit-height&image-fit=contain
>>
Does anyone have links of Please Rape Me!?
https://myanimelist.net/anime/12997/

and Kiriya Hakushakuke no Roku Shimai?
https://myanimelist.net/anime/11747

I can't log into Telegram
>>
Upscale Anon and others, what are your thoughts on higher FPS for videos? I believe it's also categorized as upscaling.
>>
>>8194889
It's not upscaling, it's interpolating, and it's quite bad in the vast majority of cases.
>>
where you guys getting your ai upscales
>>
>>8197218
Telegram, listed in the OP.
It's helpful to read the OP of threads you're in.
>>
>>8194898
upscaling is also interpolating, but spatially and not temporally
>>
>>8197831
Anyone who calls interpolating upscaling is a retard. Yes, you're scaling up the fps, but nobody in their right mind calls it upscaling.
There's a reason we have a separate word for it: the very definition of "upscaling" is in terms of resolution, not framerate.
>>
>>8198203
it's interpolation in the mathematical sense, adding new data points in between existing data points
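In that mathematical sense, the simplest possible frame interpolation is a per-pixel linear blend (a toy illustration; real interpolators like FlowFrames' models estimate motion rather than blending):

```python
def lerp_frame(a: list[float], b: list[float], t: float) -> list[float]:
    # Naive intermediate frame: blend each pixel of frame a towards
    # frame b. t=0.5 gives the halfway frame. No motion estimation, so
    # fast motion just ghosts, which is part of why naive interpolation
    # looks bad on anime.
    return [(1 - t) * x + t * y for x, y in zip(a, b)]
```

Spatial upscaling interpolates between neighbouring pixels in the same frame; temporal interpolation does the same between frames, hence the terminology argument above.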
>>
>>8198623
I didn't read your post correctly, fuck you for making it confusing by flipping the words lol.
I thought you were saying interpolating is upscaling.
Yeah, upscaling can technically be interpolation, but interpolation can't be upscaling.
Still, it would be idiotic to refer to upscaling as interpolation.
>>
>>8169605
Not sure if you can, but please include Bubble de House de Marumarumaru in this batch, since it was so great. Thanks for all the hard work.
>>
>>8201881
If SakuraCircle translates it before I finish everything, sure, I'll include it.
Though SakuraCircle has been slacking as of late.
>>
File: Riginetta-san no Bouken.png (2.35 MB, 2560x1440)
New batch and fixed/upgraded versions of older stuff is now being uploaded to Telegram. Hopefully won't take too long.
Sorry for the long wait with this batch.
>>
Hey Anon
Can you please tell me what you use to remove some of the background music in some videos? Or the male vocals too..
PS: Thanks in advance for the new batch (I know it's going to look awesome)
>>
>>8206357
I don't, it's just that some hentai releases include alternate audio that has no background music.
Though I know there's AI you can use to remove stuff yourself if you want, but I haven't really messed around with that, so you're pretty much on your own.
I think there's a tool called spleeter https://github.com/deezer/spleeter
But it was abandoned 3 years ago so there might be something better. But I haven't tried it myself.
Having heard others use tools like that, including when YouTube uses it on some videos, I would honestly not recommend it.
From my experience listening to it, it turns all robotic and muffled.
>>
File: Elfen Lied.png (2.52 MB, 2560x1440)
>>8169605
Telegram is now updated with the new batch as well as the fixed colors of everything and the upgrade to some of the shows.
Changelogs can be found here: https://mega.nz/folder/00JjhaQD#_ZGokAa7cGGxuOQjdCy25w
New spreadsheet can be found here: https://files.catbox.moe/x8ul9u.pdf
I've also somewhat started back up the anime upscales, still mainly paywalled.
Free samples in same quality you would get from paying: https://t.me/AnimeUpscales
I'll add some stuff for free every now and then. Currently it has Elfen Lied, both softsub and hardsub in AV1, free to download and use however you want.
Information about the anime upscales: https://rentry.org/upscaleanon
I also made a rentry for the hentai upscales: https://rentry.org/HentaiUpscales
I'll upload the new ESRGAN setup later.
>>
>>8207906
Thanks for upscales and your hard work
>>
File: bath-OpenCatch~.jpg (478 KB, 2560x2758)
>>
Upscale Anon, can you please upscale or add to your todo list "Dvine Luv"? It's a classic (to me at least) and doesn't seem to be upscaled anywhere on the internet. It's 4 episodes long, so it might be a hassle, but thanks if you decide to upscale it or give it consideration.
>>
>>8207906
at this point, since you are charging money, just start your own website man, get ads, premium and whatnot, because nobody wants to go through the hassle of scanning spreadsheets, PDFs and Telegram ON TOP of having to pay for it
wish you good luck though
>>
File: file.png (2.61 MB, 2560x1440)
>>8218613
>go through the hassle of scanning spreadsheets, pdfs and telegram
You're making it sound way more complex than it is and a website changes absolutely nothing other than adding extra work on something that's very likely to be taken down.
It's no different than the hentai upscales, you go through the spreadsheet and see if you like things then join. Only difference is that there's a payment step, not difficult at all.
They don't have to go through spreadsheet even, just go on the rentry and it says what I offer and then you click the link to pay, simple as.
But why the dislike for Telegram? Don't have a phone, in 2024? Or just a schizo? Torrents are dumb.
Just because you don't like the way I set up things, doesn't mean nobody else does. You're one guy.
Ads are a joke too, you get virtually nothing from it unless you're gonna get in something like hundreds of thousands of users, which is just not happening.
I just don't get how you think a website would solve anything. It would effectively be the same as a spreadsheet and surely you don't expect me to host the content there, right?
Spreadsheet/rentry is safer and easily replaceable; a website however is not.
You'd still have to go through the "hassle" of looking through the website for the very same information. And you'd still have to pay.
Just admit you don't like me having something on the side to benefit a little from, on top of my extreme generosity.
Or maybe you wanted free anime upscales. In which case, I do plan on giving some free every now and then, but you could always just do it yourself, it's not that hard, nor does it take all that long with a decent GPU.
>>
Please anon do Bubble de House The Animation i beg you
>>
I found a mega that has the first episode of Maki-chan to Nau without ghosting or a watermark

https://mega.nz/folder/Ey5QFI4B#2jxijxUmYO_llTybVu9Ylw
>>
>>8193222
The god has finally returned!! Shit man, this will be a long week of redownloading everything. As always, thank you for your work and effort.

Also just a side note, as I am redownloading everything, I've noticed that there is a mistake with Baka na Imouto as episode 2 is posted twice as both the 1st and 2nd.
>>
>>8193222
Kinda wonder how this model would look on something older like Resort Boin.
>>
>>8223983
>I've noticed that there is a mistake with Baka na Imouto as episode 2 is posted twice as both the 1st and 2nd.
I don't see that issue, all 4 episodes are different. However, I do notice that they're all missing the episode number and the "(Upscale)" in the name for some reason.
Might be due to the filename length, as it has also happened partially with "Baka Dakedo Chinchin Shaburu no Dake wa Jouzu na Chii-chan OVA", at least for the (Upscale) part.
Or "Aniki no Yome-san nara, Ore ni Hamerarete Hiihii Itteru Tokoro Da yo", which gets cut off to just "Aniki no Yome-san nara, Ore ni Hamerarete Hiihii Itteru Tokoro Da".
God I hate how Telegram handles files sometimes.
If you plan on downloading absolutely everything and have at least around half a gigabit of internet speed or more, we can try using Resilio Sync to speed up the process.
That way you'll get everything with the original filenames, it should take under 14 hours (assuming no hiccups), and it's much less work for you. Just contact UpscaleAnon on Telegram, and maybe send a speedtest result.
>>8216787
I'll see if I can remember it with my next batch, whenever that is.
>>8219349
Already answered this earlier, SakuraCircle has been slacking so no subs for it yet, so it's gonna have to wait until next batch.
>>
Hi upscale anon, please upscale Enbi. I requested it earlier too, but no upscale yet.
>>
Ok, how much of a naked-eye difference is there between the new and old versions? I have 300gb of the hentai I've chosen. The major problem is that I also heavily edit the episodes (down to an average of 4 min from 19), so having to go through all those vids when the difference is not noticeable would be daunting.
>>
>>8234092
Are you talking about the new model I used for certain shows or the 8 to 10 bit upgrade or just the color fix I applied to everything?
Because if it's the color fix, there's a simple fix you can just run on your videos to fix them: https://mega.nz/folder/QwQlxZhQ#fBfQniiipkBpaflobeK8YQ/folder/Y4A23Lqa
Just download the ColorFixer.7z and follow the instructions, it's quite easy to use and quite fast.
If it's the upgrade to the new model combination, you can see a sample here: https://slow.pics/c/e76su5V0?canvas-mode=fit-height&image-fit=contain
Neither the new model combo nor the upgrade from 8 bit to 10 bit makes a world of difference in generic scenes, but in dark scenes there's a big advantage.
If what you're editing is HMVs with mainly moving shots, I doubt anyone is going to be able to tell the difference (other than the color fix), unless you have a scene with very little movement.
But even then, your average viewer would probably only know if they actually compared it to the source you're using.
Honestly, the best practice for making edits is using the original source rather than an upscale, as you can always upscale in the future with better methods (assuming you encode properly so as not to introduce artifacts), but most of the time you can't revert it.
If you don't apply any effects or transitions, then it's possible to revert it with a script I have.
If I were you, I'd probably only swap out the sources I've written in the changelog that I've replaced with a better source and then apply colorfix to everything else.
Here's a sample of how big the difference can be with the color fix: https://slow.pics/c/w45H1Yor
The script that fixes colors does make a temporary backup just in case something goes wrong, but I can't guarantee that it'll work flawlessly. It did in my case though.
>>
File: 026-027.jpg (1.35 MB, 2804x2000)
So I did it years ago, and it was painfully slow because I was working with an 880K, a 1060 3gb and spinning rust, but I upscaled my entire hentai collection with waifu2x-ncnn-vulkan, just from 240 and 480 sizes to the nearest 1080-ish size that was a whole multiplier waifu2x-ncnn-vulkan supported. Encoded the video via CPU with HEVC on a slow preset, converted all audio to opus, and put them all in mkv containers. Had a bash script that automated the whole thing one video file at a time. Ran 24/7 for months.

In the end I was pretty happy with the results.

It kills me that this can all be done faster with better quality now.

Pic unrelated. I just happen to still absolutely love Bible Black. I can't wait until my daughter is old enough to watch it with me.

...
...
...

Yes, I was fucking with you on that last sentence.
>>
Anon, is there also a way to increase fps by some sort of, I don't know what you call it, in-between picture calculation? So if you have pic A and pic B, it calculates an intermediate so you now have A-ab-B. This would increase motion fluidity massively, I think.
>>
>>8235462
more fps in anime/hentai is literal slop, it doesn't look good at all, it's just a bigger number that people think is better than a smaller number - it isn't
>>
>>8235462
This is called interpolation, and in most cases it looks garbage. Especially in anime and hentai.
Main reason it looks garbage is because people never tinker with the scene change sensitivity, so it creates interpolated frames of things that have just too big of a gap.
Another reason is that most anime and hentai just have way too much movement for interpolation to look good.
And then there's people who use ancient ass models that perform like dogshit. I would not be surprised if some people still use DAIN.
>>8235413
>It kills me that this can all be done faster with better quality now.
Such is life. I remember early on with my setup, I had spaghetti code that made it take more than twice as long as it needed to.
And I used that for a very long time with my previous GTX 1080.
And now there's VideoJaNai, which speeds things up an insane amount again or lets you run 2 models at once for the same speed but higher quality output.
>>
>>8236135
Thanks for the explanation
>>
>>8193222
Wow, this is genuinely great. You wouldn't happen to have a tutorial on this, would you? Really curious how hentai like Kuroinu and others would look with this treatment.
>>
>>8241166
Tutorial for training a model? Unfortunately I do not. I was barely able to do it myself, and only in terms of more or less replicating the existing ESRGAN model.
I did find this which helped a lot: https://docs.google.com/document/d/1aVrdBuO5ntpaWVvb1peOEZrcZ589VzDbNtLYUHkrPMQ
But I wouldn't really call it a full tutorial.
That tutorial claims you shouldn't use, for example, DVD as the LR input and bluray as the HR input, but that is exactly what Kuronoe Ookami did.
But you have to make sure they line up perfectly, so it's a lot more work to prepare the dataset.
If you manage to prepare a dataset properly with that method, you can turn DVDs basically into blurays like he did.
Whether this would have any benefit being run on something that is already bluray like Kuroinu, I have no idea. But I wouldn't be surprised if there was some benefit.
Preferably you'd train on something from the same studio, like how he's making a model specifically for Poro hentai.
Make sure you don't waste time with the same mistake I did, by having verification LR and HR images that exist in the regular dataset, preferably even have it be something from a different hentai.
As for testing how your progress is going, I recommend setting up chaiNNer, so you can quickly check how the model is performing.
Maybe it's possible to do this without using both DVD and bluray, if you can replicate the exact same blur and downscale the HR images with such a filter, but I could not get results anywhere close to that.
>>
>>8241740

Thanks for the explanation king, I'll jump into the rabbit hole and see where I end up.
>>
Found any new way to decensor/uncensor mosaic? Other than the hen-tai and deepmosaic?
>>
Been away for a while, I saw a bunch of re-uploads in Telegram? >>8207906
Fixed colors?
Checked the changelogs and saw a new column in the PDF for "AnimeVideoV3 model".
Is the color thing about that (+ 10 bit video)?
Would you recommend re-downloading the new versions?
Also, bit lost here, how can I see which upscales are new and weren't present before?

(Also, is it just me or is downloading through Telegram really slow?)
Sorry, been out of the loop.
Thanks for the hard work as always
>>
Not sure where to ask this, but it's relevant to upscales. You know how some episodes of some hentai have bad framerate/interpolation on them? Specifically Mizuryuu Kei Land and Imouto Bitch ni Shiboraretai? Well, there's a sale on DLsite at the moment for a ton of hentai, and I'm wondering if anyone knows whether those episodes are fixed when you download them directly from there?
I may just roll the dice and buy them anyway, but I figured I should ask first.
>>
>>8245694
Colors and 10 bit are 2 different issues that I fixed.
The new row for the model is because I upgraded some by using Ani4K together with AnimeVideoV3.
As for whether you should bother re-downloading the new versions, I suggest you look at my reply to some other guy that explains it all and gives samples: >>8234292
To see what the new upscales are, you just open the latest changelog, the first one that has no letters after it, in this case Changelog 7.0.
That changelog shows everything new that I added, except for some stuff I replaced with a better source, which has been noted as such.
For the download speed, make sure to try using the official desktop version of Telegram: https://github.com/telegramdesktop/tdesktop
If that doesn't work, maybe try the K version of the web client https://web.telegram.org/k/ but either way, if you live really far away, the download is going to be shit no matter what.
Alternatively, I guess maybe wait for torrent, whenever that happens.
Unless it's just like 2 or 3 hentais, then I can just upload to Gofile for you.
>>
Upscale Anon, how should I fix some montage software having problems with upscaled videos? (Some parts of it get pixelated, like it's struggling.) What should I convert it to in order to fix that issue?
>>
>>8263343
Either switch to a good editing program that can handle 10 bit videos or re-encode it to 8 bit.
If you know how to use ffmpeg, that would be setting -pix_fmt yuv420p
But it would require re-encoding, so I suggest making a temporary bloated file at something like CRF 5 for your edit and then discarding it afterwards.
If you don't know your way around ffmpeg, you can always just grab NMKoder, which is on GitHub and has a GUI; just remember to set audio to copy and set the other 2 settings I talked about.
If you don't care about filesize, you can set the speed preset to as fast as you want, like ultrafast, since it's CRF 5 anyways.
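The re-encode described above can be sketched as an ffmpeg argument list like this (a rough sketch only; the helper name and defaults are mine, not from Upscale Anon's scripts):

```python
def build_8bit_temp_cmd(src, dst, crf=5, preset="ultrafast"):
    """Sketch of the ffmpeg call described above: re-encode a 10-bit
    video to an 8-bit H.264 temp file at near-lossless CRF, copying audio."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",
        "-crf", str(crf),        # bloated but near-lossless temp file
        "-preset", preset,       # speed preset barely matters at CRF 5
        "-pix_fmt", "yuv420p",   # forces 8-bit 4:2:0, which picky editors want
        "-c:a", "copy",          # don't touch the audio
        dst,
    ]
```

You'd run it with subprocess.run(build_8bit_temp_cmd("upscale.mkv", "temp_edit.mkv")) and delete the temp file once the edit is exported.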
>>
>>8263343
Oh I almost forgot, alternatively, try finding a setting in your editing program called "hardware acceleration", and try to turn that off when using the original video, it might be an easy fix.
>>
File: Nmkoder_4B9LX8Q1Yi.png (18 KB, 845x335)
>>8264214
This settings good?
>>
>>8264922
I would personally make it more of a temp file to delete later, with CRF at either 5 or 0 depending on storage and the speed preset as fast as it goes. But if you want to keep that altered file and throw away the original, sure, those settings seem fine; just make sure that in the "Audio" tab it's set to "Copy".
Keep in mind that it might take a while to encode the video and that the filesize will of course be quite a lot bigger than the original.
>>
>>8265017
Should I bother with Handbrake?
>>
>>8265069
At that CRF it doesn't really matter which you pick, but personally I find Handbrake more clunky, and you already have NMKoder downloaded, so I see no reason to switch.
>>
File: waifu.jpg (311 KB, 1860x1408)
Is picrel still the goat of upscaling anime pictures?
or is there something better now with new AI tech?
>>
>>8266419
>still
>waifu2x
Waifu2x has always been shit lol, it was never the goat.
ESRGAN and custom models are where it's at.
But if you're willing to sacrifice some faithfulness, generative AI can yield better results.
>>
File: vs.png (1.72 MB, 1965x980)
>>8266697
>Waifu2x has always been shit lol
so you're saying left is that much better than right?
>>
File: file.png (884 KB, 1346x467)
>>8266728
Nice attempt at a strawman, but there's more than just one model and I have no way of telling HOW you used it.
Nor do I know what the source looks like, so I can't tell which one is more faithful.
And both of them look as if you've zoomed in 500% or some shit; that's not how you look at images in any realistic scenario. You look at the full image fit to the monitor's height/width, not a hyper-zoomed crop.
But you can even do a simple Google search for "ESRGAN vs Waifu2x" and find plenty of samples showing that ESRGAN is just better, even with the faster anime model, see picrel.
But here's a proper sample comparing Waifu2x to ESRGAN's AnimeVideoV3 as well as RealESRGAN-x2plus: https://slow.pics/c/uOKye4pJ
If you really think Waifu2x is the better option in that case, with all the blur, you need some glasses.
x2plus is kind of in the middle: better than Waifu2x, but not as clean as AnimeVideoV3. And this is just scratching the surface by using official models.
Go to https://openmodeldb.info and explore a bit and you'll find even better models for your use case.
>>
File: vs2.png (2.63 MB, 1947x984)
>>8266763
>use case
I think that's the problem here.
With https://slow.pics/c/uOKye4pJ I can see that Waifu2x sucks at smaller images, sure.

So far I've only used it to make already-big images (1500x1500) even bigger (most of the time 6x-8x scaling).
I tested some of the ESRGAN models for that use case, and it doesn't look like they're that much different from Waifu2x.
I just wish chaiNNer had denoise options similar to Waifu2x's.

Anyway thanks for the recommendation.
>>
>>8267435
Even at 6x, a 1500x1500 would become as much as 9000x9000; that is an incredibly niche use case.
If you were using just 4x, then ESRGAN would easily beat Waifu2x, but I know using ESRGAN past 4x can cause issues. Original image size shouldn't matter.
But either way, past 4x is kind of a meme except for very basic flat drawings with close to 0 detail, as you're trying to enhance stuff that's basically not there.
At that point it would probably be better to use generative AI with a low denoise setting and have it create new detail, if your GPU can handle it, but it's a lot more work per image compared to just running a simple model.
Going the extra mile, you could train a LoRA on that specific artist, I guess lol.
Btw with ESRGAN it very often helps to downscale a little bit before upscaling. Like how I downscale 1080p videos to 720p before I upscale them to 1440p. It can make quite a big difference.
Heck, maybe it would even benefit Waifu2x; I've never tried that.
Denoise options aren't needed, as a lot of ESRGAN models are trained to remove compression and noise; you'd just have to find model variants trained more or less aggressively in that regard.
Like how AnimeVideoV3 basically removes all the noise you throw at it, hence why it's not good for stuff like detailed 3D Blender renders.
Also as an alternative, you might want to give Topaz Gigapixel a try. It costs money, but you can just torrent it like everyone else.
>>
>>8223108
It may not mean much.
I'd imagine posting is too fucking bothersome for most ppl with the captcha n shit but I'll endure it to say "Thank you sir" :)
>>
is there a list of all the hentais that have been upscaled so far?
like in text form.
>>
>>8268762
nvm found it.

OP, how long does it usually take you to upscale one 20 minute video?
>>
>>8267520
After testing around a bit more with chaiNNer and ESRGAN models just wanted to say sorry for trying to say that Waifu is any better or the same as ESRGAN.
ESRGAN is the goat.

>Btw with ESRGAN it very often helps to downscale a little bit before upscaling.
That's so interesting and paradoxical.
Is that >>8193359 how you usually do it? The double upscale/resize?

>Topaz
holy moly that name brings back memories.
I remember using some sort of Topaz software to upscale images back in 2010?
Of course without AI the output was really bad back in the day.
>>
>>8268868
There's so many factors involving this so I can't give a solid estimate without making a LOT of assumptions.
With my setup using a Ryzen 9 5900x and an RTX 3090, assuming 720p or 1080p 23.976fps input at 16:9 aspect ratio, using TensorRT through VideoJaNai, being able to encode while you upscale, and using 2 models at the same time, it generally runs at 0.404x realtime, so if the video is 20 minutes, it would take about 50 minutes to upscale.
If I'm running too low on RAM or doing other shit in the background, it can drop down to 0.379x or lower; it just depends on what you're doing.
If your video input is a lower/higher resolution, or you downscale to something higher than 720p, it can take longer, as well as if you upscale to something higher than I do (1440p).
Depending on your CPU and your encode parameters, the CPU could bottleneck you as well.
Using different models also impacts speed, and if you want speed, you could just run the ESRGAN model alone and it would probably be close to realtime, if not faster.
It really doesn't take long to upscale videos in 2024 if you have a decent setup.
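If you want to run the numbers yourself, the estimate is just arithmetic on the speed factor (a trivial sketch; the function name is mine):

```python
def upscale_time_minutes(video_minutes, speed_factor=0.404):
    """At 0.404x realtime, each minute of video takes 1/0.404 minutes
    of processing, so a 20 minute episode lands at roughly 50 minutes."""
    return video_minutes / speed_factor
```

round(upscale_time_minutes(20)) gives 50, and at the degraded 0.379x figure it climbs to about 53 minutes.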
>>
>>8268975
>sorry for trying to say that Waifu is any better or the same as ESRGAN
No worries lol.
>That's so interesting and paradox.
I can't say for sure, but what I think happens is that all the artifacts and noise get smaller, making them easier to work with, but so do the lines, which often makes them cleaner.
>Is that >>8193359 how you usually do it? The double upscale/resize?
Not necessarily double, just generally down to the nearest "common" resolution like 480p, 720p, 1080p and such.
With videos I always take anything above 720p down to 720p, and if it's between 480p and 720p, it depends just where exactly it is. If it's 576p, I often let it stay at that, but if it's below that, I generally downscale to 480p.
Very rarely do I touch anything below 480p, but if I do, I tend not to downscale at all. But who knows, maybe there is a benefit even then; I can't be bothered testing.
>I remember using some sort of Topaz software to upscale images back in 2010?
I didn't try it that long ago, but it wasn't that long ago that Topaz was quite shit, and in some aspects it still is, at least for videos, unless like I said it's for detailed 3D stuff.
Plus Topaz is significantly slower for video than ESRGAN, which is another reason to only use it for those scenarios. Shit runs at like 5 fps.
Back on the topic of downscaling, this is some code I have for my python ESRGAN setup, so you can see when and what I downscale to, even if it's for videos:
if aspect_ratio > 1.4:  # 16:9 aspect ratio
    if 300 <= height <= 530:
        return "853:480"
    elif 531 <= height <= 650:
        return "1024:576"
    elif 651 <= height <= 3000:
        return "1280:720"
else:  # 4:3 aspect ratio
    if 200 <= height <= 300:
        return "320:240"
    elif 301 <= height <= 550:
        return "640:480"
    elif 551 <= height <= 650:
        return "800:600"
    elif 651 <= height <= 2000:
        return "960:720"
>>
Is there a way to bypass Telegram download limits? I'm trying to redownload the whole collection.
>>
>>8269284
There is no download limit; it's just how shit the Telegram servers are structured. If you're far away, you'll get low speeds.
Though some clients can perform worse, so make sure to use the official desktop version: https://github.com/telegramdesktop/tdesktop
Either that or wait on torrent, though I have no idea when I'll bother setting it up.
>>
>>8269290
I'm getting a notification that download speed is limited, buy Telegram Premium or some shit like that... I don't even know if Telegram Premium exists here.
>>
>>8270862
Oh fuck, right, I was confusing download with upload, my bad. Yeah, Telegram Premium is an option, though if you're far away, it's still gonna be shit.
>>
File: donthurtmenomore-0001.png (168 KB, 211x374)
I have not seen you for 30 days. Did you get banned or something?
>>
File: downscale2.jpg (577 KB, 3829x2077)
>>8269078
>I can't say for sure, but what I think happens is that all the artifacts and noise gets smaller, making it easier to work with, but so do the lines, which often makes them cleaner.
makes sense.
So far every upscale I did improved in sharpness when doing the double resize. I just use a 50% resize for every image and the same upscale model both times.
Since I'm currently working with high-res images, it takes some time to double upscale them, but it's totally worth it. (From 2,429x3,200 to 19,432x25,600)
>>
>>8278090
>19432x25600 just to zoom in on nipples
Lol. I'll never not be amused by you.
Glad it's working though.
If you don't want insane filesizes though, I recommend using mozjpeg at Q 95, which is gonna be basically identical.
Or if you can be bothered getting JXL to display thumbnails (if you even care about thumbnails in Explorer), that would give a significantly better compression ratio.
>>
File: downscale.jpg (562 KB, 2033x1863)
>>8278411
>19432x25600 just to zoom in on nipples
Well, the reason for that is not only the nipple (though it's kinda fun to figure out which model did the best job by zooming in); I'm currently into ordering custom wallscrolls.
And for a 30"x45" wall scroll you need 8848x13677 pixels if you don't want blurry and pixelated shit.
Depending on how you're positioning the picture, or if it's a render, maybe even more.
>If you don't want insane filesizes though, I recommend using Mozjpeg at Q 95
good to know thanks.
My biggest file so far was a 127MB .png, which is kinda hard to handle if you have to send it to China.
>>
>>8169605
did you fine tune or train any of the models you're using?
also do you take requests?
>>
>>8284810
>did you fine tune or train any of the models you're using?
I did attempt training one, but the best I could do was something that basically copies ESRGAN. I could probably get something better done if I spent more time on it, I guess.
>also do you take requests?
What kind? If it's hentai, you're free to ask me to upscale anything and I might do it next batch, whenever that happens.
But there's no guarantee, unless you pay me that is, which you can find info on here: https://rentry.org/upscaleanon
>>
>>8285023
just found out you already did it
>Choisuji Volume 1 + 2
nice brings back memories.
thanks for the work
>>
>>8285023
>I did attempt training one, but the best I could do was something that basically copies ESRGAN
Then your issue is your dataset. The HR images should be uncompressed (PNG, BMP, etc.), and the LR images should be created from the HR using something like WTP Dataset Destroyer.
https://github.com/umzi2/wtp_dataset_destroyer
You should use an image quality assessment (IQA) metric to filter poor-quality or useless images from your dataset. Use the inference_iqa.py script with the hyperiqa metric: a minimum score of 0.66 for images above 1080p, and a minimum of 0.75 for images below. It filters out images with grain, depth of field, general blur and compression artifacts.
https://github.com/chaofengc/IQA-PyTorch
You can also process your dataset for complexity scores, and discard images that score below 0.4. I use it, but it's not necessary with the above metric.
https://github.com/tinglyfeng/IC9600
This script scores images for any kind of jpeg blocking artifact; the lower the score, the better. I discard scores above 1.5. Also not strictly necessary, but useful.
https://github.com/gohtanii/DiverSeg-dataset/tree/main/blockiness
Create 512x512 tiles from your images.
https://github.com/Kim2091/helpful-scripts
You can use Czkawka's similar-image function with the DoubleGradient hash to automatically remove similar and duplicate images, keeping only the largest of the identical files. Run it multiple times until it doesn't find any more duplicates.
https://github.com/qarmin/czkawka
Use chaiNNer to rename and resave files as 8 bit .png.

Get your images off of SadPanda torrents and screenshots from bluray rips from nyaa.

Waiting for people to train something that works for you isn't really a valid approach if you want to improve upscale quality. The public models are stagnant to say the least. Virtually all of them use the same old datasets from 2+ years ago.
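The resolution-dependent cutoff described above boils down to a simple predicate like this (the function name is mine; in practice the scores would come out of IQA-PyTorch's hyperiqa metric):

```python
def passes_hyperiqa(height, score, hi_res_min=0.66, lo_res_min=0.75):
    """Apply the thresholds above: images taller than 1080 pixels need a
    hyperiqa score of at least 0.66, smaller images need at least 0.75."""
    threshold = hi_res_min if height > 1080 else lo_res_min
    return score >= threshold
```

So a 4K screenshot scoring 0.70 passes, while a 720p image with the same score gets filtered out.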
>>
>>8287606
>Then your issue is your dataset
Did you not read the rest of the post? Specifically the part where I said I could probably do better if I spent more time with it?
I simply gave up after just doing a bunch of trial and error and didn't feel like it was worth the effort.
It's also highly unlikely that someone like me could make the next best upscaler.
>Waiting for people to train something that works for you isn't really a valid approach if you want to improve upscale quality
Why don't you train one then, seeing as you're very into it?
>The public models are stagnant to say the least. Virtually all of them use the same old datasets from 2+ years ago.
Maybe, but they're good enough for the vast majority, and the dataset being a few years old doesn't really matter that much; it's more about how you prepare the dataset, unless we're talking Stable Diffusion, where you want a newer dataset to learn specific characters.
>>
>>8287653
>It's also highly unlikely that someone like me could make the next best upscaler.
Very misguided statement. It's not effort that is your issue, it's your approach.
I created a basic dataset of 100 random galleries from each category (torrents) on sadpanda (100 Game CG, 100 Artist CG, etc., approximately 8000 tiles total) using the tools and rules mentioned above, and it produced better results for digital artwork than 4x_IllustrationJaNai_V1_DAT2 after 100k iterations, without exception, using the same DAT2 pretrain it was trained on. The only requirement I had for these galleries was that the images had to be above 512x512 for tiling purposes.
It's not a matter of effort, it's a matter of adhering to some basic guidelines for quality control, which is largely automated.
It's real-life models that are difficult to create. Anything pertaining to digital artwork is incredibly easy. It's just that you can count on one hand the number of people on the upscale discord actually using IQA tools for dataset creation.
>>
>>8291606
>Very misguided statement. It's not effort that is your issue. It's your approach
You have literally 0 knowledge of what I've done; stop speaking out of your ass lol.
You have no way of knowing what dataset I used or how I prepared it or what settings I used.
The problem in my case is effort, as I didn't bother giving it any serious attempt.
But like I said, since you're so passionate about it and think it's so easy to make a better model, why don't you prove your own words: do it, then post it on openmodeldb?
Make the next best hentai/anime upscaler instead of shitting on me for waiting for others to do it, when you won't do it yourself.
>>
>>8278576
Can I have a link to that ESRGAN model?
>>
>>8309210
Pretty sure he's using this one: https://openmodeldb.info/models/4x-realesrgan-x4plus-anime-6b
>>
>>8309210
>>8309239

>>8278411
>Mozjpeg
I still can't manage to install this.
guess I'm too retarded for all this programming environment and terminal stuff.
there is no simple .exe for it, right?
>>
>>8309570
You go here: https://github.com/mozilla/mozjpeg
Grab the latest release with a Windows build (4.0.3), extract it wherever, then use cjpeg.exe from the command line.
You can delete the "Static" folder in it as it's not used. It's even possible to delete the majority of the "Shared" folder while still having it work, but I have no idea what impact that might have.
I use a batch script to process images; in case you want it, make 2 folders next to cjpeg.exe called "input" and "output", then a file called "convert.bat" or whatever you want to call it, edit it in Notepad and put this:

for %%a in ("F:\mozjpeg-v4.0.3-win-x64\shared\Release\input\*.png") do "F:\mozjpeg-v4.0.3-win-x64\shared\Release\cjpeg" -outfile "F:\mozjpeg-v4.0.3-win-x64\shared\Release\output\%%~na.jpg" -quality 90 "%%a"

Replace the filepaths with whatever you want. I have them hardcoded as I don't plan on moving them.
Change the -quality 90 to whatever you want. I personally use 90 for my upscale previews, as it looks basically identical unless you're pixel peeping hard, flipping back and forth.

Alternatively, I have a setup pre-made for this https://files.catbox.moe/io8m1h.7z
Though I don't remember which version of mozjpeg it is, but it shouldn't make much of a difference.
It has a bunch of files removed, and I never noticed any side effects from removing them after using it for quite a long time.
This setup doesn't have hardcoded paths either, so you can put it wherever you want, and it asks for the quality level when you run it.
>>
>>8309595
>Alternatively, I have a setup pre-made for this https://files.catbox.moe/io8m1h.7z
ok I got this one to work thanks for the explanation.
>>
So, if AnimeVideoV3 is currently the best model for animated videos, what's the best model for real-life stuff?
>>
>>8310563
For both real life and 3D with fine detail I use Topaz VEAI.
>>
>>8169915
Hopefully yes
>>
For the upscales on the Telegram, what parameters should I use to compress them without losing too much quality (with NMKoder)? Preferably H265.
>>
>>8315453
It depends if you're using NVENC or your CPU.
For CPU, I'd probably go with CRF of 18 if using H265.
Though that is based on what I would use when encoding from scratch like how I encode the upscales at CRF 15 with H264.
H265 behaves a bit differently and 18 is more or less equivalent to 15, but should be lower filesize.
If you want even better compression, I'd suggest giving SVT-AV1-PSY a try at CRF 20, but you'd have to use that through ffmpeg or something with CMD.
But even regular SVT-AV1 would outperform H265, though the one built into nmkoder is quite outdated by now, so it wouldn't be nearly as fast as it can be.
For that, I'd suggest the custom handbrake builds unless you want to deal with CMD: https://github.com/Nj0be/HandBrake-SVT-AV1-PSY
As for what you personally should set the CRF to, that's a bit tricky. You'd just have to test around and see how much you're willing to sacrifice; start from maybe 25 with H265 and then decrease it if the quality isn't high enough for your liking.
Make sure to set it to 10 bit as well, and remember that NMKoder re-encodes the audio by default, so if you don't want that, set it to copy. Otherwise use Opus at around 96kbps.
If you want even better performance, you'd have to upscale from scratch, as re-encoding multiple times adds more and more quality loss, depending on how heavily you compress.
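As a rough sketch of what that advice translates to on the ffmpeg side (flag values mirror the guidance above; the helper name is made up, and for SVT-AV1-PSY specifically you'd need an ffmpeg build that includes it):

```python
def build_svtav1_cmd(src, dst, crf=20, preset=4):
    """Sketch of an ffmpeg SVT-AV1 encode at the suggested CRF,
    10-bit output, with audio re-encoded to Opus at 96 kbps."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libsvtav1",
        "-crf", str(crf),
        "-preset", str(preset),      # lower = slower but better compression
        "-pix_fmt", "yuv420p10le",   # 10-bit, as recommended
        "-c:a", "libopus", "-b:a", "96k",
        dst,
    ]
```

Swap "-c:a", "libopus", "-b:a", "96k" for "-c:a", "copy" if you'd rather keep the original audio untouched.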
>>
>>8315496
Wow thank you very much for this quick and detailed explanation, I'm going to look into SVT-AV1 which I didn't know at all. You're right, we all have different requirements in terms of video quality. I'll do some tests for that.
And thanks again for the handbrake link, it's really going to help me.
>>
>>8315595
Np. Hope things work out for you.
AV1 can at times make things a bit too blurry for your liking; if you notice it doing that, add "--enable-tf 0" (without the quotation marks) to the advanced parameters in HandBrake.
>>
>>8315646
One last question: what are the encoder options, and how should I modify them?
>>
>>8315685
I would probably set the tune to SSIM (not Subjective SSIM). The preset depends on how much of a rush you're in: 4 is good, but 5-7 isn't bad either, though from my knowledge they did some retarded shit where 6 is tied to 7.
Patience does pay off though.
Make sure video encoder is set to AV1 10-bit.
In the audio tab, if you don't want to re-encode the audio, set it to passthrough. Otherwise set it to the same as NMKoder, Opus at about 96kbps.
Set the framerate to same as source. Other than this, I don't think there's anything you really need to do, unless you want to experiment with some advanced settings. For that you'll probably have to check the AV1 reddit.
>>
Got any advice on how to handle 3D CGI works? Those are increasingly rare nowadays, so the only good ones are from 10+ years ago. Not to mention most of these old works were made by boomer japs with toasters, so the resolution is usually quite bad.
>>
Telegram is so fucking trash, I've been trying to export all the videos for the past couple of weeks using multiple different clients and,

>transfer speed is shit
>crashes randomly
>can't resume

Make a fucking torrent already.
>>
>>8320184
How to do 3D CGI works? You mean how to upscale them?
If so, I would suggest using Topaz VEAI
>>8320460
Does it hurt to ask nicely? I'm not obligated to give you anything at all. You're lucky to even have the Telegram lol.
Telegram is fine; crashes are very rare and don't happen if I leave it exporting in the background.
>can't resume
You can set it to export between certain dates, so use smaller batches if you're experiencing crashes.
>transfer speed is shit
Did you try getting premium? Though if you're far away, it's gonna be shit either way.
Whenever I download from someone from Bangladesh, it takes quite a while.
>>
I'm curious as to what you guys prefer btw: https://slow.pics/c/nC62teGc
The usual Ani4K+ESRGAN, or AnimeJaNai.
A few notes on each:
ESRGAN: Cleans up really well and makes lines look really good, but at the cost of faithfulness: it destroys backgrounds and intentional blur and slightly alters colors.
AnimeJaNai: Significantly less of an "upgrade" and doesn't get rid of compression artifacts or noise, but is faithful in terms of color, lines, backgrounds and intentional blur.

Video samples:
ESRGAN: https://gofile.io/d/kH2es6
AnimeJaNai: https://gofile.io/d/9PBkpB

Also curious if you guys would want AV1 rather than H264 for approximately 1/3rd of the filesize while still being basically the same quality.
A sample of AV1 (Using AnimeJaNai): https://gofile.io/d/VzpkJc
It will likely not work for those of you using a phone, but then again, wtf are you doing watching 1440p stuff on a tiny device lol.
It'll also most likely not work in your favorite editor, if you're a HMV creator, meaning you'd have to re-encode it first, but some editors don't like my current H264 10 bit either and re-encoding doesn't take long.
>>
>>8320650
>A sample of AV1
Here's a bigger sample, using the full episode, AV1 with AnimeJaNai and H264 with the current Ani4K+ESRGAN: https://gofile.io/d/baGBvw
>>
My vote's for Ani4K+ESRGAN.
The difference's too noticeable for me.
PS: I just downloaded that Maria sample, the ESRGAN one, and changed it to bt601. Dunno if it's faithful, but to my eyes it looks even better than the AV1 one in terms of color.
>>
>>8322870
Thanks for the vote.
The video as-is should be faithful to the original, especially the AV1 that uses AnimeJaNai, but I agree that it can sometimes look better when altered.
AnimeJaNai should make it basically 1:1 with the original video, whereas ESRGAN is maybe 98%; changing to bt601, however, alters it quite significantly.
>>
>>8320657
I don't like av1
>>
>>8326584
Do you mean you don't like the AnimeJaNai model?
AV1 is just the video codec.
>>
>>8184126
Am I supposed to know who this is?
>>
>>8333566
Guy who claims he's making the next best hentai upscaler, but then compared his current results to the ancient AnimeVideoV2 rather than V3, which is 10x better lol.
https://ksy.moe/pandora/hentaisr/
They tend to post a lot of good releases on nyaa.
>>
some of these are pretty good upscales desu
>>
>>8169605
Upscale Anon If it's not too much of a hassle, may I request "Last Waltz: Hakudaku Mamire no Natsu Gasshuku"?
>>
>>8169605
Any plans to release your version of Esrgan soon?
>>
>>8354697
Shit, I kinda forgot lol. And with how good VideoJaNai was, it made me forget it even more.
Anyways, I just finalized it, and made HEVC and SVT-AV1 versions of the script as well for those looking for better compression.
The automatic color fix is a bit iffy, so the setup includes the manual scripts to drag and drop files onto, both to apply the fix and to revert it.
Make sure to verify videos with the included scripts too, which check that the framerate and resolution+aspect ratio are okay.
They're very easy to use and take only like 2 seconds.
Ignore the fact that the scripts are named V6 despite the setup being V7; that's just what I named them after changing them so many times.
Here's the new ESRGAN 7.0 setup, make sure you have python 3.10 installed and run the "Install requirements.bat" before using anything:
https://mega.nz/folder/QwQlxZhQ#fBfQniiipkBpaflobeK8YQ
Enjoy.
>>8169605
Linking to OP for easy access
>>
Could you possibly do an upscale of the remastered Bible Black Episode 3? It never got a western release
>>
>>8354806
Hi Upscale anon, I was trying the new build, but when I try to run "Install requirements" I get "Could not find a version that satisfies the requirement time/csv". I read somewhere that that's because they're part of the Python installation? (using 3.13 btw) But when I try to open UpscaleV6.py (or any other), I see a CMD window opening and closing right after.
>>
>>8364832
Like I've said, make sure you use python 3.10. You use the bleeding edge, you tend to get cut.
I have not tested it with anything other than 3.10, as that's what I use for everything.
If you absolutely need to use 3.13 for some reason, you'll have to get ChatGPT to fix it.
If you plan to fix it: To see what the actual problem is, just open a CMD window manually then run the script within that.
>>
>>8364832
Drag and drop the video on top of the script if you aren't command lining it
>>
>>8364959
The script isn't designed for drag and drop, you just have the files in the same folder as the script then run it.
>>
>>8364967
works on my machine, but could be the way I have python set up

I'm noticing right now, on a 480i video being upscaled to HEVC, that the upscaled frames have some flattening of the pixels going on, erasing (very small) detailing. Anything I can do so it doesn't destroy it as badly, or am I just SOL?
>>
>>8364971
>works on my machine, but could be the way I have python set up
You sure you don't have the videos in the same folder? Because dragging a file on the script will also just run the script.
480p videos aren't exactly going to look great when upscaling with ESRGAN, nor is ESRGAN really that good for preserving small details.
For that you'd need to use a model like AnimeJaNai. You can use it through the VideoJaNai program very easily, but most 480p videos will end up with a fucked up resolution unless you manually resize afterwards.
I suggest trying the model 2x_AnimeJaNai_HD_V3Sharp1_UltraCompact.onnx. Just run it twice if you need the output to be higher than 960p.
Even on videos above 480p, you might want to use it, depending on what your preference is. Here's a sample from a 720p source: https://slow.pics/c/nC62teGc
Ignore the fact that I used a bad source for that episode, it still shows how different the models behave. AnimeJaNai is less "clean", but also much less destructive.
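To spell out the "run it twice" arithmetic with a 2x model (plain math, nothing model-specific):

```python
def chain_2x(width, height, passes):
    """Each pass of a 2x model doubles both dimensions."""
    for _ in range(passes):
        width, height = width * 2, height * 2
    return width, height

# A 480p source needs two passes to get past 960p; you'd then downscale
# the 1920p result to your actual 1080p/1440p target.
print(chain_2x(640, 480, 1))  # (1280, 960)
print(chain_2x(640, 480, 2))  # (2560, 1920)
```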
>>
File: evtexture.png (170 KB, 845x348)
170 KB
170 KB PNG
So I'm going to chime in to talk about something more tech and methodology related, so sorry if it isn't on topic enough. I've done a bunch of paper reading on the subject, both image and video super-resolution. It seems to me the main issue is that when we talk about AI video upscaling, we're still fundamentally talking about approaches that take an image upscaler and apply it to every frame of a video. But that doesn't take into account the extra information a video provides in terms of temporal information or motion, which you can get over several frames. I'm talking about an approach similar to what Nvidia does with DLSS, but with less information than a game engine affords.
If you look at the academic literature on state-of-the-art video upscaling, this is exactly the approach one should be taking. The main issue, however, is that nothing is really set up to do this kind of thing at even a consumer-ready scale. That seems like a major issue that will impede progress on getting better super-resolution video output, regardless of the model or tools used.
That said, now for the controversial bit, because of some Luddism here. I think https://openmodeldb.info/models/2x-AnimeSharpV2-RPLKSR-Sharp is really the new king in terms of speed-to-quality ratio for anime, as PLKSR (https://github.com/dslisleedh/PLKSR) has academically been measured as the most efficient architecture this year. I know it has problems with depth of field, but for me it does great, and it has a soft version which outperforms most Real-ESRGAN models. However, I'm split, because the author also released https://github.com/Kim2091/Kim2091-Models/releases/tag/2x-AnimeSharpV3, which he claims is better than V2 in all situations and solves the depth-of-field issue. While I disagree with the first assertion, the second one is valid, so there might be a battle brewing over whether RPLKSR can replace Real-ESRGAN at last.
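The multi-frame idea above, at its simplest, means feeding the model a window of neighboring frames instead of a single frame. A toy sketch of just the input handling (illustrative only, not any real model's pipeline):

```python
def temporal_window(frames, i, radius=1):
    """Clamp-padded window of neighboring frames around index i: a toy
    illustration of multi-frame (video) SR input, not any real model's pipeline."""
    last = len(frames) - 1
    idx = [min(max(j, 0), last) for j in range(i - radius, i + radius + 1)]
    return [frames[j] for j in idx]

frames = ["f0", "f1", "f2", "f3", "f4"]
print(temporal_window(frames, 2))  # ['f1', 'f2', 'f3']: center frame plus neighbors
print(temporal_window(frames, 0))  # ['f0', 'f0', 'f1']: edges repeat the first frame
```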
>>
File: paperswithcode.png (138 KB, 1140x1071)
138 KB
138 KB PNG
>>8367705
On cutting-edge image upscaling models: technically, by traditional academic metrics, the best right now with actual code according to Papers with Code is DRCT (https://github.com/ming053l/DRCT), but nothing is fine-tuned with it for anime specifically, and what results there are kinda suck, as seen in https://openmodeldb.info/models/4x-mssim-drct-l-pretrain. It would also run way too slow for anything other than one-off single images where you need the absolute best quality.
One of the better transformer models that came out last year is DAT (https://github.com/zhengchen1999/DAT), and most of the high-quality upscaling implementations focus on this model for "highest quality", as it is one of the fastest transformer models out there. Of these, I think the IllustrationJaNai model (https://openmodeldb.info/models/4x-IllustrationJaNai-V1-DAT2) probably works best for anime content. Given the above, if quality matters and you can do it non-realtime, these are what I would actually upscale anime videos with. I really wish there was more focus on 2x, but it is what it is.
>>
>>8367705
The problem with 2x-AnimeSharpV2_RPLKSR_Sharp_fp16 is that it enhances jagged edges, unlike 2x_AnimeJaNai_HD_V3Sharp1_UltraCompact, which honestly does a better job at upscaling.
Speed to quality ratio doesn't realistically matter, only quality, as long as the models are fast enough, which the RPLKSR version of AnimeSharp isn't.
With a single frame, AnimeJaNai took 3.6 seconds while AnimeSharp took a whopping 46 seconds and did a worse job at it. It looks oversharpened and it doesn't clean anything up and it fucks with the background too much: https://slow.pics/c/M8xYvKdG
Speed to quality ratio is a joke with AnimeSharp RPLKSR, and even if you had all the time in the world, it's still better to use the significantly faster AnimeJaNai model.
It might have a better SSIM, but upscaling isn't really about being 100% faithful to the source. If you wanted that, you wouldn't be upscaling. It's supposed to enhance it.
Papers and real world experience are often two completely different things. It might look good on paper, but in a real world usecase, it does not look as good.
If you're going to change anyone's mind, it's much more effective to show images that compare the result of each model as well as the time it takes.
>>8367753
>https://openmodeldb.info/models/4x-IllustrationJaNai-V1-DAT2 probably the best for anime content
That isn't even made for anime lol. It's made for illustrations, as the name suggests.
But even on the illustrations, it trades blows with the ESRGAN version: https://slow.pics/c/GfArurPG
So far, from testing and from looking at comparisons, AnimeJaNai is the best if you want it to preserve details but enhance a little bit. For more aggressive enhancement, use AnimeVideoV3.
>>
>>8364949
I uninstalled python and Installed 3.10
but I get the same when I try to run the Install requirements.bat
I installed pillow instead of pip (using chatGPT for help)
but now when I run the scripts it just creates an AAC audio file separately without output, and when I run it with an audioless clip it does nothing

I'd like to use your setup to upscale some clips and to color fix the previously downloaded upscales, but not sure what I'm doing wrong
>>
>>8368766
You made sure to properly uninstall the old version, restart PC then install 3.10 and add it to PATH during install?
>I installed pillow instead of pip
Pillow is a dependency, pip is something python uses to install Pillow.
Either way, I can't really do much unless you provide logs. Open CMD and run it manually within that, both the install requirements and the upscaling script. That way you can copy the output.
Btw, if you're upscaling 720p sources, I'd recommend using VideoJaNai instead; it's much faster and more convenient. It can be used with 480p stuff too, but with regular hentai it often results in the wrong aspect ratio, which has to be fixed manually afterwards.
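For context on why 480p sources end up with the wrong aspect ratio: DVD-era video is typically stored at 720x480 but meant to be displayed at 4:3 or 16:9, so upscaling the storage resolution preserves the squeeze. The manual fix is simple arithmetic; a sketch (the display aspect ratios here are assumptions, real ones come from the file's metadata):

```python
from fractions import Fraction

# Hypothetical helper: given an upscaled height and the intended display
# aspect ratio (read from the source's metadata), compute the width to
# resize to so the picture is no longer stretched/squeezed.
# (Round to an even number for actual encodes.)
def display_size(height, dar):
    return round(height * Fraction(*dar)), height

# e.g. a 720x480 DVD source upscaled 2x to 1440x960, then corrected:
print(display_size(960, (4, 3)))   # (1280, 960) for a 4:3 show
print(display_size(960, (16, 9)))  # (1707, 960) for a 16:9 show
```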
>>
File: seriallain.png (1.23 MB, 1300x640)
1.23 MB
1.23 MB PNG
>>8367767
I haven't done much A/B testing for accuracy as far as hue/shade differences, but yeah, I agree, RPLKSR is not ready yet, as you say, with the issues with the background. But as I said, no great models are really ready, and for my needs it works alright, even if the architecture shows promise; even the author went back to ESRGAN with AnimeSharpV3, which superseded it. I disagree with the magnitude of the difference though: RPLKSR was 10.3 seconds for me vs almost 1.9 seconds with Real-ESRGAN, but neither is really running realtime here, and I'm probably running a faster GPU than you are.
I disagree strongly with your assessment that DAT merely trades blows with the compact model. The reconstruction is clearly better; look, for example, at one of your comparisons with Serial Experiments Lain. The loops around the patch have clear artifacting, with inconsistent shaded colors and patches of white even inside those patch holes/loops on the ESRGAN model, while on the DAT side there is none of it, and you'll see similar issues in high-frequency areas that ESRGAN has a harder time upscaling but that get a much cleaner result with DAT. Moreover, the reconstruction of the drawn lineart is aesthetically better, in my opinion, than ESRGAN's reconstruction. I really dislike the way ESRGAN reconstructs lines; it usually looks like a crude attempt at vectorizing the image. This is why I preferred the RPLKSR model too, where the line reconstruction actually looks like it could be drawn, while it doesn't look great with Real-ESRGAN.
Don't get me wrong, Real-ESRGAN and its finetunes are still pretty great, but this is where state-of-the-art upscaling with transformer models has started to yield clearly better results, if you care about quality and can afford the extra time to get better details on single images. For videos, though, that additional time for a potential DAT-based model adds up per frame, so it's probably not worth it just yet.
>>
>>8368954
>RPLKSR was 10.3 seconds vs almost 1.9 seconds for me than with RealESRGAN but neither are really running realtime here and I'm probably running a faster GPU than you are
It really depends on the input resolution as well as the upscale factor and what method you use to upscale.
I was using the specific models I mentioned in ChaiNNer with a 720p source image upscaled to 1440p.
My GPU is an RTX 3090.
For all I know, the smaller gap in your case could be due to different resolution or workflow. But maybe also because you're using a different model?
I have specifically said 2x_AnimeJaNai_HD_V3Sharp1_UltraCompact, but you keep going on about "RealESRGAN", not AnimeJaNai. For all I know, you could be talking about RealESRGAN_x4plus_anime_6B or AnimeVideoV3. Both of which are slower than AnimeJaNai.
But that time is for doing a single frame at a time. Real world applications are much faster, like when you use VideoJaNai with TensorRT then the AnimeJaNai model is definitely realtime, even faster than realtime.
Though realtime doesn't really matter, as long as it's not obnoxiously slow, which RPLKSR is.
>if you for example look at one of your comparisons with Serial Lain Experiments
Cherry picking is very easy. That specific sample might look better with DAT, but for example the CD album art - Hanato Meto, if you look at the eyes, with DAT, it's missing a big portion of the sclera on the right eye, whereas ESRGAN kept it.
But honestly, it's much more a matter of preference, they both look good in their own ways and there isn't a world of a difference in quality.
Either way like I stated before, it's not made for anime, but illustrations, so for the topic of this thread, it's not all that relevant. But feel free to keep talking about it if you want, it's still interesting to read either way.
While DAT might not be as good for anime yet, people are still finding ways to improve ESRGAN models too, like fine tuning it for a specific studio, like done here: >>8193222
>>
I know VideoJaNai isn't your advertised thing, but how do I set the color matrix on the workflow?
I'm experiencing some color change on upscales from JaNai, and your color fix script isn't compatible with those outputs.
Alternatively, how would I go about using one of their models in your setup? I'm fucking around with JaNai's "2x_AnimeJaNai_SD_V1beta34_Compact.onnx" and it's giving me the best results for what I'm working with, but the color shift after the denoise is setting off my autism.
>>
>>8369637
From my experience with it, setting the color matrix with VideoJaNai shouldn't be necessary as it preserves the original colors, other than a tiny bit if using AnimeVideoV3 model. If using AnimeJaNai, there should be no color difference.
But maybe that specific model you're using behaves differently. After all, it's V1, and beta at that.
>your color fix script isn't compatible with those outputs
Strange, it should work just fine, but I guess it depends on what codec you're using. If it's H265 (HEVC/x265) or AV1, then I don't know what the parameter is to change the color matrix.
If using H265, maybe try editing the script and change "h264_metadata=matrix_coefficients=6" to "hevc_metadata=matrix_coefficients=6".
I haven't tested it, but maybe it would work. For the reverse script do the same, except have the value be 1 like the original script.
The script can simply be edited in notepad.
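For the curious, the reason a wrong matrix tag shifts colors at all is that BT.601 (matrix_coefficients=6) and BT.709 (matrix_coefficients=1) decode the same YCbCr values to different RGB. A minimal full-range sketch using the standard conversion formulas (illustrative only, not the fix script itself):

```python
# Illustration: BT.601 and BT.709 turn the same YCbCr pixel into different RGB,
# which is exactly the shift you see when a file is tagged with the wrong matrix.
def ycbcr_to_rgb(y, cb, cr, kr, kb):
    """Generic full-range YCbCr -> RGB; y in [0, 1], cb/cr centered at 0."""
    kg = 1.0 - kr - kb
    r = y + 2 * (1 - kr) * cr
    b = y + 2 * (1 - kb) * cb
    g = (y - kr * r - kb * b) / kg
    return r, g, b

pixel = (0.5, 0.1, 0.1)
print(ycbcr_to_rgb(*pixel, kr=0.299, kb=0.114))    # BT.601 decode
print(ycbcr_to_rgb(*pixel, kr=0.2126, kb=0.0722))  # BT.709 decode: different colors
```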
>Alternatively, how would I go about using one of their models in your set up?
Unfortunately, I don't think you can, as my setup doesn't use onnx models. But I haven't actually tried as I haven't had a reason to do so.
If it is possible, you'd probably have to edit the upscale_frames function in the upscale script.
>>
In other news, Kyokusai released a V2 of their HentaiSR (HSR) model. And it's just as shit as I thought it would be.
Then again, that is to be expected, when they brag so hard about making the next best hentai upscaler.
It cleans up way less than AnimeJaNai but also enhances the noise an obnoxious amount, making it more appealing to just keep the original video rather than upscaling.
I tested V1 too, and it handles the noise a lot better, but it cleans up basically nothing else.
Here's a comparison for those interested: https://slow.pics/c/B0yVZlji
>>
How does one train these AI upscalers?
I'm only familiar with SD models.
>>
>>8370398
This guy provided a lot of useful information regarding that >>8287606
You can also read my post here regarding that which links to a document explaining how to do it: >>8241740
Good luck.
>>
>>8369096
I use ComfyUI because ChaiNNer hasn't released a new version for a while, and Spandrel, the code library used as the backend, has had updates since then supporting some new models like SeemoRe, MoSR and RPLKSR DySample, though none of them are relevant to our comparison. Mostly, I find that ComfyUI is more optimized to run models faster, given that Stable Diffusion is a lot heavier than upscaling images. Mileage may vary though.
I used Real-ESRGAN to refer to the base model AnimeJaNai was trained on, because that is what we are really comparing and arguing over here, but sure, we can use the actual model name if it makes it less confusing for you. I agree that anything more complex than AnimeJaNai doesn't have the resources to run realtime on your average system with a discrete GPU, but honestly, as long as we can't bolt PyTorch to mpv or have a native inferencing library easily used by media players, a lot of limitations exist in using VapourSynth to hook things together. It works, but it's less than ideal. And as I said in my opening paragraph, once that happens we should be moving to a system that can use multiple frames, because this is video; a video upscaler can extract information for reconstruction that an image upscaler has no access to, and get better quality.
As far as what you pointed out: I do agree the small sclera omission is there, but as you said, preference matters, and I think DAT doing better on the big-picture things contributes to it doing better overall for me. For example, what I notice more in that area is the hair/eye overlap on the left eye, where Real-ESRGAN makes a mess of it while DAT handles it better. I agree there is room to do better, but at some point you run into the limitations of the model. You can see this in the LLM world: you can't scale a 1-3 million parameter model to do the things a 9-13 billion parameter model can in intelligence and so on. 70MB is only so much data to fit for upscaling.
>>
>>8370493
I have never touched ComfyUI due to having seen it with Stable Diffusion and I am scared of the cursed spaghetti mess lol.
>I used Real-ESRGAN to refer to the base model on which AnimeJaNai was based on because that is what we are really comparing and arguing over here
But the base model isn't really what we're comparing; it's AnimeJaNai, a finetuned model. That is how we should compare, since after all we're using it for those tasks, not general upscaling.
>I do agree there is room to better but at some point, you do run into limitations of the model
That is the case with any model, but ESRGAN can be improved an immense amount if finetuned correctly, to the point that there isn't really a need to pick anything that does "better" at a much slower speed.
>70MB is only so much data to fit for upscaling
Sure, but right now it does quite a good job for that considering it's trained to be good for ALL anime. Now consider how much that data is worth if it's dedicated purely to 1 hentai studio, like I mentioned earlier here: >>8193222
When it does that good of a job, there's simply no reason to get any better, not that I think it can anyways.
Even if it means you'll have multiple 70MB models, it's not really the end of the world. I'm used to having 6GB models for Stable Diffusion lol.
But even "2x_AnimeJaNai_HD_V3Sharp1_UltraCompact" uses a mere 1.16MB and "2x-AnimeSharpV2_RPLKSR_Sharp_fp16" uses 14.6MB.
You could have a finetuned 1.16MB model that performs a lot better than AnimeJaNai.
>we should be moving to a system when that happens that can use multiple frames because this is a video to extract information for reconstruction that an image upscaler doesn't have access to in order to get better quality
That is probably not feasible, as it would take a very long time to extract the information from all the frames, not to mention the amount of storage. And if you only use a small portion, it wouldn't really work, since hentai has so many identical frames.
>>
>>8370493
Realtime is honestly nothing important IMO, as long as we don't have to wait an eternity for an upscale. Like my upscales which uses Ani4K+AnimeVideoV3 which run at 0.404x of realtime (assuming 23.976 fps).
And when other architectures are significantly slower, it doesn't matter that much if they're maybe on average 5% better quality or something.
Maybe if it was 5% slower aswell instead of orders of magnitude slower, then it would be used.
But is it really better if it's using 14 times more data to achieve only a minor uplift in quality?
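For a concrete sense of what those speed factors mean in wall-clock time (plain arithmetic):

```python
# At a given fraction of realtime, processing a video takes
# duration / speed_factor of wall-clock time.
def processing_minutes(video_minutes, speed_factor):
    return video_minutes / speed_factor

ep = 25  # a typical ~25 minute episode
print(round(processing_minutes(ep, 0.404), 1))      # ~61.9 min at 0.404x realtime
print(round(processing_minutes(ep, 0.404 / 14), 1)) # a 14x slower model: ~866 min
```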
>>
>>8370503
>That is the case with any model, but ESRGAN can be improved an immense amount if finetuned correctly, to the point that there isn't really a need to pick anything that does "better" at a much slower speed.
Density of information you can pack into a weight does matter; look at how much you have to "overtrain" models in the LLM space to get minor gains on benchmarks, while small models still lack skills like spatial awareness and riddle solving. These models are only good enough to do specific things; larger generalist models can do more, things you'd pay a lot of money for, like coding or legal document interpretation/writing, which you would never trust a smaller model to do well.
>That is the case with any model, but ESRGAN can be improved an immense amount if finetuned correctly, to the point that there isn't really a need to pick anything that does "better" at a much slower speed.
I already pointed out the advantages of the alternative model choice. Sure, you can say it isn't worth it, but it is there, and the field is still actively researched.
>Even if it means you'll have multiple 70MB models, it's not really the end of the world.
I really don't know at that point. To me, I'd rather have a single better model 2x the size than two separate models to run based on the situation, especially when density increases more than linearly, given how many more connections the weights can have between themselves at a larger size if trained correctly; so it's not only fitting 2 smaller models of half the size, it could be more like 4.
>You could have a finetuned 1.16MB model that performs a lot better than AnimeJaNai.
Right, but I need to remind you here, because we're taking this line of thinking too far: efficiency isn't just about model size; runtime and the number of operations needed also play a part, and they are correlated with but not tied to model size.
(1/2)
>>
File: file.png (17 KB, 1117x67)
17 KB
17 KB PNG
>>8370503
>That is probably not feasible, as it would take a very long time to extract the information from all the frames, aswell as the amount of storage. And if you use a small portion, it wouldn't really work, since hentai has so many identical frames.
You can't honestly say this when Nvidia DLSS exists and does upscaling for video games, which have tight requirements on real-time input and frame timing. Video, by comparison, mostly runs at less than half the 60 FPS a "well-running" game is required to hit. There is so much that can be done given double the time budget of a game-tier upscale that, even without the specialized inputs, you could at least get close to that quality if all the pieces were there. It's really an infrastructure problem, and I will continue to maintain that it is.
>>8370509
>Realtime is honestly nothing important IMO, as long as we don't have to wait an eternity for an upscale. Like my upscales which uses Ani4K+AnimeVideoV3 which run at 0.404x of realtime (assuming 23.976 fps).
Define "eternity". We already have consternation in prior posts about upscales of a single image taking double-digit seconds per frame. It's quite clear that it does matter. If we're talking about state of the art, it actually takes a full minute to upscale a single frame with an academic model. We're talking about 6.951×10^-4 speed, or 3 whole orders of magnitude difference.
Refer back to how I qualify efficiency as not only being of model size but of runtime and operations used. But I get you. Even though the original ESRGAN paper was written 6+ years ago and we've gotten much better models, it is worth asking if the juice is worth the squeeze. The state of the art model has the following numbers.
>1
>Hi-IR-L
>SSIM 0.9366
>PSNR 33.13
Is the .02 SSIM and ~1.5 PSNR gain on paper, not judged qualitatively, worth that?
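As a sanity check on those numbers: one frame per minute against 23.976 fps playback is indeed 1/(60 × 23.976) ≈ 6.951×10^-4 of realtime, and since PSNR is log-scaled MSE, a ~1.5 dB gain corresponds to roughly a 30% MSE reduction (the example MSE value below is arbitrary):

```python
import math

# One frame per minute vs 23.976 fps playback:
speed = 1 / (60 * 23.976)
print(f"{speed:.3e}")  # 6.951e-04 of realtime

def psnr(mse, max_val=255.0):
    """Standard peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10 * math.log10(max_val ** 2 / mse)

mse = 40.0                            # arbitrary example error level
better_mse = mse * 10 ** (-1.5 / 10)  # ~0.708x MSE: what a +1.5 dB gain implies
print(round(psnr(mse), 2), round(psnr(better_mse), 2))
```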
(2/2)
>>
>>8370503
>I have never touched ComfyUI
Don't
>>
>>8370526
>You can't honestly say this when Nvidia DLSS exists and does upscaling for a video game
I can, and that is just it. Video games. You do realize that there is a big difference with video games and actual videos, right?
DLSS relies on the game engine.
Also, DLSS looks like dogshit.
>Define "eternity"
Anything slower than what I get with the multi-model combo that gave 0.404x speed, which relies on a 2x model and then a 4x, because AnimeVideo forces you to run at 4x when using VideoJaNai.
If you want to convince me that it's fast enough to bother using, test it in a real world sample. Use it in VideoJaNai using TensorRT and show the speed you get with it and with what GPU.
Maybe also test AnimeJaNai, to get a frame of reference.
>Is the .02 SSIM and ~1.5 PSNR gain on paper, not judged qualitatively, worth that?
Paper literally does not matter. The paper just shows what is closest to the source material, which is in all cases useless. If you want that, just train a model that does basically nothing.
>>8370520
>I pointed it out already the advantages already on alternative model choice. Sure, you can say it isn't worth it but it is there and the field is still actively researched.
Sure there might be advantages when it's properly trained, but it's still going to be significantly slower and not a world of a difference compared to a properly trained ESRGAN model.
>Even though the original ESRGAN paper was written 6+ years ago and we've gotten much better models, it is worth asking if the juice is worth the squeeze
Well yes, because you can get much better results with a model that is blazingly fast. Sure you can get as good results with the RPLKSR, but at the cost of a significantly slower model.
Speed matters a lot. Maybe you think it's worth quadrupling the processing time for that extra 5% quality, but most won't.
It's like encoding AV1 at preset 0 instead of 4, takes significantly longer, minimal gains.
Slower is better, but not always worth it.
>>
>>8370592
>I can, and that is just it. Video games. You do realize that there is a big difference with video games and actual videos, right?
I explained the difference already. If you are purposefully blind to it, plugging your ears and waving it away, I'm not going to convince you it could be great. But at the very least it is possible to do; there isn't a world where you couldn't, it just doesn't exist today.
>Speed matters a lot. Maybe you think it's worth quadrupling the processing time for that extra 5% quality, but most won't.
This isn't what is indicated by others in the thread. It matters to some extent, and by and large, I would say for video specifically, the speed quality tradeoff after a certain point of being able to run real-time is pretty much a hard requirement. However, I personally would go use better models in a one off instance for best quality upscaling in image scenarios and do for non-anime stuff. There are just situations where quality does matter even at that 5%. Not everything is broadly applicable on that front. And we'll get to a point where the power requirement needed for things will broadly diminish and move that curve up with newer hardware. When the difference becomes a second instead of 10, people will switch over if it is better. It's just like how people use x264 medium now with beefier machines after 4 cores stopped being standard even though x264 faster is technically the sweet spot for encoding.
>>
>>8370844
>I explained the difference already. If you are purposefully being blind to it and plugging in your ears and waving it away
I mean, that seems to have been what you've been doing the past few posts lol.
>it is possible to do it, there isn't a world where you couldn't do it but it doesn't exist today
This reads like a schizo post lol.
>This isn't what is indicated by others in the thread
Maybe because they literally do nothing themselves and just wait for me to do it?
If people were upscaling themselves, they'd want good speed.
>However, I personally would go use better models in a one off instance for best quality upscaling in image scenarios and do for non-anime stuff
You keep forgetting what thread you're in, this isn't images, we're upscaling anime and hentai here.
And it's not just one or two videos, it's literally thousands.
You need to consider the real-world usage of the models: if it's slower, you'll basically have a heater running for longer and pay a lot more in electricity too.
Too many words, too few tests. If you want to convince anyone to use your meme upscaler, like I've said, run tests and show comparisons and how long it took.
>even though x264 faster is technically the sweet spot for encoding
"faster" is definitely not a sweet spot by any means for those who care about encoding. It's only really used by shitty tube sites that don't have time for anything and need to push content as fast as possible.
Nobody who takes encoding seriously would use anything faster than the "slow" preset. But the real sweet spot is the default preset, medium.
Ideally, everyone should be using veryslow for most tasks, it's fast enough. But unfortunately we don't live in an ideal world.
>>
>>8370874
>I mean, that seems to have been what you've been doing the past few posts lol.
You aren't putting a technical argument into any of your responses to me; they're all gut feeling and opinion, despite me trying to get you to do this.
The rest of your post is pointless to answer to if you aren't going to in good faith have a conversation instead of handwaving shit away. So yeah, let's just end it here since you don't seem open to anything alternative as opposed to whatever arcane methodology works for you which is fine. I'm not spreading a religion here, I wanted to just have a conversation which doesn't seem possible here so okay, I'll stop bothering you if you aren't open to it.
>>
>>8169605
By any chance do you still have version 5.2 of your ESRGAN? I want to be able to upscale images and I'm not sure if version 7.0 can.
>>
>>8371217
You're literally just basing everything off of PAPERS lol. Papers and real world experience are two completely different things.
I am open to new things, but you need to show actual samples in order to convince someone, something I've been saying over and over yet you can't get it into your thick skull.
Come back when you're ready to properly show why someone should use your meme upscaler.
>>8371324
Sure do, here you go: https://mega.nz/folder/QwQlxZhQ#fBfQniiipkBpaflobeK8YQ
7.0 could probably be made to upscale images: just copy over the models and the batch script, and it'll behave the same. That would save you from having 2 different setups if you want it combined.
>>
>>8368790
Yes, properly uninstalled and restarted.
I installed 3.10 (adding it to PATH during installation).
When I try to run the script I just get an AAC audio file.
When running it manually it says
FileNotFoundError: [Errno 2] No such file or directory: "ESRGAN 7.0 Directory\\Temp\\ColorCheck\ClipName_frame.png'
When I run the install requirements I get the same
"Could not find a version that satisfies the requirement" for both time and csv

Gonna try to use chatgpt later on to see what the deal is since I haven't had much time lately, thanks for the answers though
>>
>>8375550
Strange. Does the filename or the directory have any weird non-english characters? If so, maybe try changing that.
As for the requirements, it's even stranger: I installed them a long time ago, and now I get the same error when trying to install them.
From some google searches, people say, like you did, that they're already installed with python, but some of those posts were over a year ago, and I made the setup before then and could install them just fine.
But I'm guessing it shouldn't be an issue then.
>>
>>8376035
yeah it's strange, honestly I'm not sure if not installing that is causing the problem (as in, whether it is actually already there or not, since the error doesn't say anything about that)
But I did try changing the directory before my last post. I put it directly on C:\ in a folder called "ESRGAN7" to avoid any shenanigans, and that was the result (just getting the audio file)
>>
>>8376366
What about the filename of the video you're trying to upscale? Have you tried changing that to a simple name?
>>
Is there an update coming soon? There have been some good releases lately
>>
Can I use ChaiNNer with ESRGAN, or do I have to use the GitHub version? Also, what happened to the guide and the 5.0 OP link?
>>
>>8380330
You can use the ESRGAN model in ChaiNNer, yes.
As for the guide, it's included in the setup, look at the Readme.txt. 5.0 is old, either way you can find 5.2 and the new 7.0 here: https://mega.nz/folder/QwQlxZhQ#fBfQniiipkBpaflobeK8YQ
5.2 is the same as 5.0, just with capability of doing single images with a slower model.
7.0 only does video, and it requires python, but it's easy to install.
>>
File: 1731702898394320.png (10 KB, 656x106)
???
>>
>>8380366
>You can use the ESRGAN model in chainner
Which one should I use? The model, I mean
>>
File: file.png (2.34 MB, 2560x1440)
>>8381023
Depends on what you want, really. For AnimeVideoV3, just drag realesr-animevideov3-x1.bin onto chaiNNer, or whichever scale factor you want to upscale with. So if you want a 4x upscale, you drag on realesr-animevideov3-x4.bin.
If you want to use the other model that's in the 5.2 setup, you just drag that on instead.
>>8380994
Make sure you're running Python version 3.10, and that it's added to PATH.
I haven't seen a CMD like that with "PS" and colored stuff by default, so if that's some Linux shit, I have not tested the script on Linux, only Windows.
Though I do notice that I forgot to include Pillow in the install requirements file.
So open CMD and type "pip install pillow", then try again.
Also do "pip install numpy"
>>8376366
Maybe the issue above is part of the problem you were experiencing; try "pip install pillow" and "pip install numpy", then re-run the script.
>>8169605
Updated the setup to 7.1 to remove the redundant pip installs of csv and time, as well as to add pip installs of Pillow and NumPy.
https://mega.nz/folder/QwQlxZhQ#fBfQniiipkBpaflobeK8YQ
If you don't want to redownload the entire thing, just go into the loose files and download the new "Install requirements.bat" to replace the old one.
Alternatively, just open CMD and type these two commands:
>pip install pillow
>pip install numpy
Or edit the existing batch script to have those.
Sorry for the issues with the script. Hopefully everything will work properly now.
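If you're unsure whether the pip installs actually took, here's a quick way to check from Python. This is a generic sketch, not part of the setup; note that "PIL" is the import name of the Pillow package:

```python
import importlib.util

# Report which modules are not importable. find_spec returns None when a
# module can't be found, without actually importing anything.
def missing_modules(names):
    return [n for n in names if importlib.util.find_spec(n) is None]

# The dependencies mentioned in this thread; install anything reported
# missing with "pip install pillow" / "pip install numpy".
print(missing_modules(["PIL", "numpy", "csv", "time"]))
```

An empty list means everything the script imports is present.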
>>
>>8381206

What's this from? Thought it was Tsundero 4 but it wasn't.
>>
>>8385755
OVA Incha Couple ga You Gal-tachi to Sex Training Suru Hanashi
>>
>>8386236
Is this from an upcoming batch? Can't find it on Telegram
>>
>>8387803
Yes it is
>>
Was Kowaremono (from Bootleg studio) already upscaled? Can't seem to find it :/
>>
File: Kowaremono II.png (2.31 MB, 1920x1440)
>>8390631
Kowaremono 1 and 2 were upscaled. You can see this by doing Ctrl+F in the spreadsheet or by searching on Telegram.
>>
>>8391167
Yeah, I saw that on Telegram, but it's not the one I'm referring to.
I mean this one: https://www.underhentai.net/kowaremono-the-animation/
>>
>>8381206
The script breaks if the video has no audio
>>
>>8391673
Ah I see, maybe I'll do it at some point.
>>8391674
Thanks for the information. Maybe I'll fix it some day, but for now just use the older version, or VideoJaNai I guess, or mux some temporary audio into your file and discard it afterwards.
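If anyone wants to work around it themselves in the meantime, a rough sketch of how a script could skip the audio step: probe the file first, and only extract audio when a stream actually exists. This assumes ffprobe is on PATH (the setup already depends on FFmpeg); has_audio_streams and video_has_audio are made-up names, not the actual script's functions.

```python
import json
import subprocess

def has_audio_streams(probe: dict) -> bool:
    # True if the ffprobe JSON output lists at least one audio stream.
    return any(s.get("codec_type") == "audio" for s in probe.get("streams", []))

def video_has_audio(path: str) -> bool:
    # Assumes ffprobe (bundled with FFmpeg builds) is available on PATH.
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_streams", "-of", "json", path],
        capture_output=True, text=True,
    )
    return has_audio_streams(json.loads(result.stdout))
```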
>>
File: 1730567953984033.jpg (89 KB, 1418x370)
>>8391714
Actually there's another problem: if the video is shorter than 50 seconds, it can't do the color check and will fail.
>>
>>8392150
Yeah, it's set to 50 seconds to try to avoid checking black frames at the start of videos. If you have clips shorter than that, adjust it accordingly.
Open the script in notepad/notepad++ and edit this line:
time_position = 50 # Start at X seconds
It's under the "determine_best_color_matrix" function.
>>
Which ESRGAN model should I download for chaiNNer use, for anime art?
>>
The color check section basically makes the script unable to handle very short videos. I'm trying to upscale some animations ripped from hentai games, and these things are not even a second long, which fucks the script up: if there's no difference in the first frame, it'll try to do another comparison 5 seconds later.
>>
>>8400314
With such very short videos, there's not much point in using the newer setup, as the main benefit is with big videos: it avoids having to dedicate a ton of space all at once during upscaling, saving you between 1/6th and 1/2 of the storage required to upscale.
It'll still write just as much data, though; it just won't keep it around until everything is done.
So I'd suggest using the earlier setup in the mega. Alternatively, you could try setting the time position to 0, assuming the first frame isn't a black one.
Maybe even 0.2 or whatever, but I haven't tried using decimals personally, so I have no idea if that would even work.
You can also adjust the time step, to change it from 5 seconds to something else, as well as how many steps it's willing to perform.
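Those three knobs (start position, step size, step count) could also be clamped automatically so short clips don't fail. A sketch of the idea, with made-up names and defaults guessed from this thread (50 s start, 5 s steps), not the actual script's code:

```python
def check_times(duration, start=50.0, step=5.0, max_steps=5):
    """Yield candidate timestamps for the color check, clamped to the clip.

    If the clip is shorter than the preferred start position, fall back to
    the middle of the clip so even sub-second videos get a usable frame.
    """
    first = start if duration > start else duration / 2.0
    for i in range(max_steps):
        t = first + i * step
        if t >= duration:
            break
        yield t
```

With this, a 0.8-second clip would only be checked at 0.4 s, while a two-minute video gets the usual attempts at 50, 55, 60 seconds and so on.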
>>8396218
Depends on your preference, really. I'd suggest going to modeldb, finding some models you think look interesting, and giving those a try until you find something that suits your needs.
>>
>>8400897
I do like the color thing, though; it outputs colors closer to the source video.
>>
>>8400922
If you want, and adjusting the script didn't work, you could always use the older setup and then run the manual color-fix script from the new setup on the videos to fix them.
I wish all this color shit wasn't a thing; it gave me such a headache to get working, and even then it doesn't always work. Maybe one day I'll have a script that handles it differently, but I dunno how lol.
>>
Can I request Jitaku Keibiin? We got the 2019 version and Jitaku Keibiin 2 upscaled.

https://myanimelist.net/anime/34638/Jitaku_Keibiin


