/g/ - Technology


Thread archived.
You cannot reply anymore.




File: file.png (1.54 MB, 1000x1309)
Won't fuck up the thread subject edition.

/gedg/ Wiki: wiki.installgentoo.com/wiki/Gedg
IRC: irc.rizon.net #/g/gedg
Progress Day: rentry.org/gedg-jams
/gedg/ Compendium: rentry.org/gedg
/agdg/: >>>/vg/agdg
Render bugs: renderdoc

Requesting Help
-Problem Description: Clearly explain the issue you're facing, providing context and relevant background information.
-Relevant Code or Content: If applicable, include relevant code, configuration, or content related to your question. Use code tags

Previous: >>101946689
>>
>>102005995
32 bits float lighting in ur game engien, u have it? post it
>>
live
>>
File: file.png (633 KB, 862x882)
>>102005995
Even though I went through the book quickly, it really does contain enough information to understand how AAA engines work. It can send you down a rabbit hole with the extra links and books for further information, but the book in itself should be enough.
In the end it looks like 80% of a game engine is focused on graphics and tools, a lot of tools, so making the GUI is the bane of the whole thing.

I hate GUI with a burning passion, you cannot make native applications without making a compromise. Some engines use C# and Avalonia UI for their engine tools, some even go with Java, duplicating work. It's just almost impossible to do it in C/C++/Rust/Zig... And no, GUIs built on top of OpenGL suck hard. I'd rather go with SDL2/3 CPU raster graphics for maximum compatibility and lightness.

I wish the SunVox maker shared his GUI setup, it plays nice and doesn't consume resources like crazy, while also rendering all the fancy things and graphs. It even handles text input right.
https://warmplace.ru/soft/sunvox/
>>
>>102006744
Software rendering your GUI does not somehow make it better, it just makes it slower
>>
>>102006744
should I use Qt for my engine UI?
>>
>>102006782
You'd be surprised that the Excel team programmed their "cells" from scratch and copied the system's look down to autistic levels. All of that was actual software rendering. And nobody can shittalk Excel.

CPU rendering is amazingly fast, people forget that things get autovectorized under the hood, so even the dumbest brute forced loop runs at amazing speeds. Some people go the software rendering route because it saves resources, as in systems and battery resources.
I chose Sunvox because it happens to run on everything under the sun while maintaining the exact same look. It's the true crossplatform experience.
>>
>>102006844
Wasn't excel trying to gloat by saying they have 500 quadrillion lines of code or something like that
>>
>>102006844
>CPU rendering is amazingly fast
No, GPU rendering is amazingly fast
CPU rendering is slow in comparison
CPU rendering does not save resources, it is strictly worse in every way apart from it being easier to do
>>
>>102006744
>GUI that are builtin on top of openGL suck hard.
what does that even mean? the GUI rendering is the easiest part, the hardest part is designing widgets that don't suck
>>
>>102007051
I think this might be the guy who visits every few months and strongly advocates for software rendering but none of his arguments actually make any sense
>>
how do you load resources? not just png textures etc. i'm thinking about something like unity, where you have prefabs that have references to other prefabs, resources like textures etc. looks like the hardest part of game engine.

any tutorials?
>>
>>102007346
prefab definition schemes in something like json or even lua
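A minimal sketch of what such a definition scheme could look like in JSON (every field name here is invented for illustration, not any engine's actual format):

```json
{
  "id": "prefab.goblin",
  "components": {
    "sprite": { "texture": "textures/goblin.png" },
    "health": { "max": 30 },
    "loot":   { "drops": ["prefab.coin", "prefab.dagger"] }
  }
}
```

References to other prefabs ("prefab.coin" above) are just string IDs resolved by the loader, which also keeps the files mergeable in version control.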
>>
>>102007346
There are no tutorials on those things. Too complicated for newbies, trivial for pros. Unity generates a UUID for every resource and then uses them. I think UE uses relative paths to .uasset files. I compute a 64-bit hash from the path and some extra data and use that as the resource ID. And resources get processed into an optimal format; you don't want to decode PNG at runtime, you want to memcpy BC7 into a staging buffer.
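The hash-of-path approach described above can be sketched like this. FNV-1a is just one possible hash, and the function names and the way extra data is chained in are illustrative, not any particular engine's API:

```cpp
#include <cstdint>
#include <string_view>

// 64-bit FNV-1a: cheap, deterministic across runs and platforms,
// good enough to serve as a stable resource ID.
constexpr uint64_t fnv1a64(std::string_view s,
                           uint64_t h = 14695981039346656037ull) {
    for (unsigned char c : s) {
        h ^= c;
        h *= 1099511628211ull;
    }
    return h;
}

// Resource ID = hash of the asset path, optionally chained with extra
// data (e.g. a sub-asset name) by reusing the running hash as the seed.
constexpr uint64_t resource_id(std::string_view path,
                               std::string_view extra = {}) {
    return fnv1a64(extra, fnv1a64(path));
}
```

Because the ID depends only on the strings, tools and the runtime can compute it independently without a shared database, at the cost of having to detect the (rare) collision.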
>>
>>102007346
It's not the hardest part of game engine lmao, it's just ugly because you need to come up with a naming convention for every single asset
>>
>>102007455
>trivial for pros
I don't think so, he's kind of touching on one of the most difficult and unique parts of game engines, how the content pipeline is set up
>>
>>102007346
I wouldn't call it the "hardest" as much as there's just a lot of different ways to do it that are all going to have their own strengths and weaknesses. It definitely can get complex depending on how many different types of assets you have to deal with. It's also one of the more boring (well, boring to most people) areas of engine design so it doesn't get much attention compared to more flashy stuff. You should give us a bit more detail on what game you're making and what your expectations are for how such a system should operate. I'll just write some random things here and hope they help you.

I should note now that, if you aren't doing a game that you expect to be a massive time and effort commitment, you can also probably do without building a standardized content pipeline like Unity/UE/others do. Your "content pipeline" can just be a bunch of different functions that load your files in sequence.

Regardless, as to what you're describing, you generally want some kind of standardized data definition format that lets you store metadata about both binary assets (i.e. textures, meshes, audio, etc.) and game content (as you say, prefabs, or other types of content that you generate, either through some kind of editor tool or just by writing it by hand). This could be any standardized text format (JSON could definitely work). Your definition files should be plain text, not binary, because binary files make merging changes essentially impossible.
>>
File: file.png (58 KB, 776x657)
>>102006744
It looks like I was correct, nothing beats DirectDraw and raw pixel editing in the end. It's always the simplest dumbest solution that wins.
>>
>>102008001
>nothing beats DirectDraw and raw pixel editing in the end
GPU rendering
>>
File: Cat.png (24 KB, 274x258)
>>102006782
>Software rendering your GUI does not somehow make it better, it just makes it slower
Honestly, if you can't implement a simple sprite rendered GUI at interactive frame rates on a modern 4GHz system, you need to seriously introspect your ability and your technical choices. Game devs did this kind of thing on a measly 80486 machine without much of an issue.
>>
>>102008119
can vs should
>>
Some FPSs have the HUD react to the player moving and jumping around by shifting around. How is this done? At least at a pseudocode level?
>>
>>102006860
I can tell you never graphics programmed in your life.

>No, GPU rendering is amazingly fast
GPU rendering is amazingly fast on a complex scene. We're talking about compute tasks that are very large and embarrassingly parallel.

>CPU rendering is slow in comparison
Again, depends on what you are rendering and how much.

>CPU rendering does not save resources, it is strictly worse in every way apart from it being easier to do
CPU rendering can be an order of magnitude more efficient on resources, such as buffers, memory utilisation and other boilerplate stuff.

People don't realise that while GPUs are pretty fast for certain types of compute workloads, it also comes at a monumental setup cost. You need to create a ton of resources, pipelines, buffering and round-trip IO just to get the ball rolling. All of that costs memory, latency and compute time. To justify using a GPU, the workload has to be large enough where the total compute time savings outweigh the setup costs. Sometimes it's just not worth it, and you are better off running the job directly on the CPU.
>>
>>102008202
same way you'd implement a parallax effect, or you can implement a spring simulation and just make the player affect the spring. the HUD just follows at a clamped max distance of course.
>>
>>102008202
I would compare the player acceleration to the up and right vector of the camera and then use that to move the hud in the opposite direction
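Both suggestions boil down to a damped spring per axis: camera/player acceleration kicks the HUD offset away, the spring pulls it back to rest, and a clamp caps the travel. A one-axis sketch (all constants are arbitrary picks for the example, not taken from any game):

```cpp
#include <algorithm>
#include <cmath>

struct HudSpring {
    float offset = 0.0f;       // current HUD displacement along one axis (pixels)
    float vel    = 0.0f;
    float stiffness  = 120.0f; // spring constant k, pulls offset back to 0
    float damping    = 22.0f;  // ~2*sqrt(k): close to critically damped, no wobble
    float max_offset = 24.0f;  // clamp so the HUD never drifts too far

    // kick = player acceleration projected on the camera's right/up vector,
    // negated so the HUD lags opposite to the motion. Semi-implicit Euler step.
    void step(float kick, float dt) {
        float accel = -stiffness * offset - damping * vel + kick;
        vel    += accel * dt;
        offset += vel * dt;
        offset  = std::clamp(offset, -max_offset, max_offset);
    }
};
```

Run one instance for the horizontal axis and one for the vertical; with no input the offset settles back to zero on its own.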
>>
>>102008225
Literally everything you posted is wrong
The cost of GPU programming is the mental overhead it imposes on the programmer; I guess that scared you off and led you to make up these cope stories you tell yourself
>>
Non meme answers only.
Has anyone here done cool stuff with Rust and Vulkan/OpenGL? Cool stuff means physics simulation, game stuff etc. I'm currently evaluating adding Rust to my skillset. It seems like a fun language.
>>
File: 1674293618233449.jpg (950 KB, 2730x4096)
my game engine is the best game engine BAR NONE!
>better than godot
>better than unity
>better than unreal
>better than beavy
>>
>>102008268
Child, I'm talking from experience. I've been graphics programming long, long before your autistic father squirted you into the guts of that land whale you call mother.
>>
>>102008336
No you haven't, because you posted factually incorrect information
GPU is faster even at small scales, it doesn't use more memory, and it doesn't have any significant setup costs (even if it did it wouldn't matter because you only pay it once)
>>
>>102008320
based. that's the attitude
>>
>>102008349
Hahah, show me, faggot.
Show me where you found that morsel of information.
>You can't
Because you are full of shit and you know it.
>>
>>102008381
I didn't "find" it, I know it from experience
>>
>>102008381
Why should he be held to a higher standard of proof than yourself
>>
>>102008387
You know jack shit.
You demonstrated no skill.
Everything you said so far is speculative dogshit coming from a 14 year old child.
>>102008387
>>102008394
Stop samefagging.
>>
>>102008320
That's a man
>>
>>102008225
>We're talking about compute tasks that are very large and embarrassingly parallel.
It's telling that you think shading UI elements isn't embarrassingly parallel
>>
>>102008424
You haven't demonstrated any skill either. You just posted bullshit. You don't seem to understand how the GPU works. It's not a powerful machine with a startup penalty, it's a parallel processor that, if you don't use it, just sits there doing nothing. Drawing stuff with the GPU is essentially free: you issue the draw command and it does its thing while your program continues to operate
>>
>"GPU is fast trust me bro"
>the most common occlusion culling technique is to use CPU to render a low resolution buffer for the GPU to finally test if it can draw something or not
>trust me bro i've been doing graphic dev for my whole life
what a fucking retard
>>
>>102008520
the technique you're referring to uses the GPU to draw a low resolution buffer, not the CPU
>>
>>102008309
I did, I was writing a game from scratch, now I am writing a gui library and a mspaint clone with it.
I used winit and wgpu for everything, it's really nice to use compared to c++ and opengl.
If you don't want to use wgpu for any reason, you can also use bindings to opengl or vulkan and it will still be a lot less annoying than doing it from c++. I definitely recommend wgpu though.
There's also this new library, from the guy who wrote most of wgpu: https://github.com/kvark/blade/tree/main/blade-graphics
I haven't tried it yet, but it looked cool.
>>
>>102008520
People might do frustum culling on the CPU, but I don't think anyone does any occlusion culling on the CPU like that.
>>
>>102008474
Apparently according to you,
>compiling shaders costs nothing
>creating render pipelines costs nothing
>transferring buffers from system RAM to VRAM costs nothing
>encoding render commands costs nothing
>changing pipeline states costs nothing
>reading buffers back into system RAM costs nothing
>managed buffers where local copy and a duplicate is kept in VRAM costs nothing
>not being able fill tread groups adequately on the CPU costs nothing
>complex branching on the GPU costs nothing
>command dispatch latency costs nothing
>VRAM memory swap costs nothing

Looks like it is you that don't seem to understand how the GPU works.

GPUs are good for big jobs, period. For small jobs, such as rendering a crappy in-game GUI, CPUs are capable of doing it faster. Again, as I said a million times before, this also depends on what you are doing.
>>
>>102008653
>>not being able fill tread groups adequately on the CPU costs nothing
i meant GPU
>>
>>102008653
>>compiling shaders costs nothing
small cost paid once on startup
>>creating render pipelines costs nothing
small cost paid once on startup
>>transferring buffers from system RAM to VRAM costs nothing
small cost paid once on startup
>>encoding render commands costs nothing
a phrase you made up that doesn't mean anything
>>changing pipeline states costs nothing
cheaper than software rendering because you're doing it on the GPU
>>reading buffers back into system RAM costs nothing
why would you need to do this
>>managed buffers where local copy and a duplicate is kept in VRAM costs nothing
why would you need to do this
>>not being able fill tread groups adequately on the CPU costs nothing
a phrase you made up that doesn't mean anything
>>complex branching on the GPU costs nothing
cheaper than software rendering because you're doing it on the GPU
>>command dispatch latency costs nothing
a phrase you made up that doesn't mean anything
>>VRAM memory swap costs nothing
a phrase you made up that doesn't mean anything

That's a hilarious post, does /gedg/ finally have its first schizo?
>>
>>102008629
>I did, I was writing a game from scratch, now I am writing a gui library and a mspaint clone with it.
That's cool. Do you like it? How much experience do you have with graphics(even hobby experience counts)?
>I used winit and wgpu for everything, it's really nice to use compared to c++ and opengl.
I feel like it's still not ready. Also, I got too familiar with opengl so a different shader language is not really what I'm looking for.
>There's also this new library, from the guy who wrote most of wgpu: https://github.com/kvark/blade/tree/main/blade-graphics
I'd like to avoid things that are someone's experiment at least for now, when I'm still in the learning phase. Maybe I can do that when I start tinkering with shit.
>>
>>102008653
>>102008669
>>>not being able fill tread groups adequately on the CPU costs nothing
>i meant GPU
If you are instance rendering quads with a unified UI shader you will easily saturate your core count while fragment shading
>>
>>102007460
the problem is not just loading primitive resources like images and audio using GUIDs, but deserializing an object graph with references to other object graphs/resources (with possible circular dependencies), especially if you want to use multithreading.

maybe i need to look at how it is implemented in godot.
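One common way around circular references is two-phase loading: first allocate every object by ID, then patch the references in a second pass, so a reference can point at an object whose own data hasn't been filled in yet. A toy sketch (the `Prefab`/`PrefabDef` shapes are invented for illustration, not Godot's or Unity's actual types):

```cpp
#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

struct Prefab {
    std::string id;
    std::vector<Prefab*> refs; // resolved in pass 2
};

// Flat on-disk form: each record just lists the IDs it references.
struct PrefabDef {
    std::string id;
    std::vector<std::string> ref_ids;
};

// Pass 1 creates every object so all IDs are resolvable; pass 2 wires
// up the pointers. Cycles (a -> b -> a) need no special casing.
std::unordered_map<std::string, std::unique_ptr<Prefab>>
load_prefabs(const std::vector<PrefabDef>& defs) {
    std::unordered_map<std::string, std::unique_ptr<Prefab>> out;
    for (const auto& d : defs) {                         // pass 1: allocate
        auto p = std::make_unique<Prefab>();
        p->id = d.id;
        out[d.id] = std::move(p);
    }
    for (const auto& d : defs) {                         // pass 2: patch refs
        for (const auto& r : d.ref_ids)
            if (auto it = out.find(r); it != out.end())
                out[d.id]->refs.push_back(it->second.get());
    }
    return out;
}
```

This also parallelizes reasonably: pass 1 partitions trivially, and pass 2 only reads the by-then immutable ID table.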
>>
File: RONG.gif (1.09 MB, 270x150)
>>102008706
>a phrase you made up that doesn't mean anything
Now I know you are a nocoder, jeetcoder at best.
Holy shit, you never heard of the concept of a render command encoder? Or a compute command encoder? Or command buffers in general? Fundamental concepts like this? If you are actually a graphics programmer, you would know about this.

>>reading buffers back into system RAM costs nothing
>why would you need to do this
Because sometimes you need the result back from a GPU? You know, like in a practical real world application? If you are actually a graphics programmer, you would know about this.

>>managed buffers where local copy and a duplicate is kept in VRAM costs nothing
>why would you need to do this
Because there are multiple ways for allocating GPU resources? Ever heard of shared, private, managed resources? They all have different memory access latency costs associated with it? If you are actually a graphics programmer, you would know about this.

>>VRAM memory swap costs nothing
>a phrase you made up that doesn't mean anything
Did you know that GPU drivers can swap texture resources back and forth from system RAM, depending on how much VRAM is utilised? If you are actually a graphics programmer, you would know about this.

>>not being able fill tread groups adequately on the CPU costs nothing
>a phrase you made up that doesn't mean anything
https://developer.nvidia.com/blog/cooperative-groups/
Aka threadgroup occupancy. If you are actually a graphics programmer, you would know about this.

>>complex branching on the GPU costs nothing
>cheaper than software rendering because you're doing it on the GPU
Typical nocoder making assumptions about performance without profiling. If you are actually a graphics programmer, you would know about this.

Anyway, I stopped reading the rest of your shitpost. But feel free to keep larping as a graphics programmer, it keeps me amused.
>>
File: u1ODYEF7WT.png (1.21 MB, 1434x781)
>>102008720
>That's cool. Do you like it? How much experience do you have with graphics(even hobby experience counts)?
I have hobby experience only. I did a bit of hobby gamedev in Godot which taught me a bit about shaders, and I did pic related in C++/openGL for a game jam, but even that was 99% glsl. I only started doing "real" graphics programming when I started with wgpu. I followed this tutorial: https://sotrh.github.io/learn-wgpu/ which in retrospect wasn't that great, but it was enough to get me started. I've had a lot of fun with it since then.
>>
>>102008887
A render command encoder is something that only exists in... Metal
Bringing data back from the GPU is something that very rarely needs to be done, definitely not for drawing a GUI
Swapping textures in and out of memory is also something that doesn't need to be done
Filling up thread groups is something completely irrelevant and it seems you just threw it in there for the hell of it
Profile any software renderer and it's gonna be slower than what you're doing on the GPU, the GPU is literally free performance, you don't seem to understand that
>>
>>102008979
Nice. I do have some OpenGL knowledge so I'm thinking of doing that with rust for the time being for getting myself familiarized with rust. Maybe I'll like rust enough that I'll pick up wgpu in the future, who knows.
Rust bait posts are plentiful on /g/ but Rust usage posts are very rare. Thanks for sharing your experience.
>>
>>102008987
>A render command encoder is something that only exists in... Metal
It exists in Vulkan:
https://docs.vulkan.org/spec/latest/chapters/cmdbuffers.html

It exists in DirectX:
https://learn.microsoft.com/en-us/windows-hardware/drivers/display/submitting-a-command-buffer

It exists in WebGPU:
https://developer.mozilla.org/en-US/docs/Web/API/GPUCommandEncoder

Even OpenGL has this abstracted behind its state machine interface.

Render command encoding is how GPUs work under the hood. Some call it "Command Buffers", "Render Passes", "Command Processor". Same shit, different stink. Here is a good article on the subject. It's old, but still relevant in many ways. It also highlights the cost of dispatching work to the GPU.
https://fgiesen.wordpress.com/2011/07/09/a-trip-through-the-graphics-pipeline-2011-index/
To all the nocoders shitposting in this thread, take note. Study it.
>>
>>102009173
If you use a term that only exists in APIs that nobody uses like WebGPU and Metal, nobody is going to know what you're talking about
You appear to be gish galloping to make up for the fact you have no fucking clue of the actual performance characteristics of hardware-driven rendering. It's just faster, it's free performance, it's an entire computer purely dedicated to drawing graphics that operates in parallel with your CPU
>>
>>102009208
Stop with the back pedalling and the mental gymnastics, mate. You just got btfo with facts AND actual authoritative references. Learn and move on.
>>
>>102009251
You called a command buffer a "render command encoder" which it is not called in any of the APIs that anyone uses you fucking moron
You're writing posts in the shape of an argument but you aren't saying anything substantial
>>
>>102009208
kid you lost grab your L and get the fuck out of this thread.
>>
>>102009278
Is that what you are resorting to now? Petty little spats and splitting hairs? The fact that you are hung up about such details tells me you are still very green. If you are actually a graphics programmer, you would know that command encoding/recording/scheduling is common parlance in the profession, and command buffers are the business end of such work. Did you even read any of the links I posted? You should.
>>
>>102009374
You don't win an argument by listing a bunch of unrelated facts and getting angry
>>
>>102009384
You started this by saying "If you don't know X you aren't a graphics programmer"
Post your renderer then smartass
I'll show you mine
>>
File: 1724252834199.png (16 KB, 627x437)
>>102005995
Almost at 10.000 LoC
>>
>>102010128
Show us the game
>>
DirectX 7 is all you need.
>>
>>102010164
This is the editor, the game is here >>101304197
Sadly I don't have any new demo, because I now have to write the in-game logic for the data that the editor creates. For example, I can now create an arbitrary amount of items, skills, enemies and such, and it will convert all of that into Lua code, but I don't have the code that will use them yet.
>>
>>102010309
How do I get the SDK on Windows 10 without Windows 10 shitting the bed?
>>
>>102010309
opengl 3.3 is all you need
>>
lol I'm already over 10k.
>>
>>102010537
10k lines of odin? What are you building?
>>
>>102010548
I'm the sim-like dev
>>
>>102010548
>>
>>102010573
>>102010591
bare opengl/vulkan or something else?
>>
>>102010605
bare opengl. For this rewrite I didn't want to go through all the boilerplate hassle of vulkan. Previously I used zig + vulkan.
>>
>>102010626
Cool. Odin is interesting. And it seems to just werk for gamedev stuff. I think I'm gonna play around with it.
>>
>>102010642
what's great about Odin for gamedev is how nicely maps, vectors and matrices are embedded in the language.
Overall a very enjoyable language to use.
>>
>>102010338
It's all backward compatible homie
#define DIRECT3D_VERSION 0x700
#define D3D_OVERLOADS

#include <windows.h>
#include <d3d.h>
#include <ddraw.h>

#pragma comment(lib, "dxguid.lib")
#pragma comment(lib, "ddraw.lib")
#pragma comment(lib, "d3d9.lib")
>>
>>102010829
jesus, C is already a disgusting language but win32 with it's defines and special way of writing C (almost it's own language at this point) is on whole other level of disgustingness. I can't imagine how this was made other than bill gates literally went to the shittiest streetshitting village in India, collected 50 most notorious street shitters from there and tasked them with writing this.
>>
>>102006166
imagine using floats in currentyear
>>
>>102010898
filtered
>>
File: download.jpg (2 KB, 92x92)
>>102006744
Bane?
>>
>>102010898
brother, that syntax is nearly 30 years old now and it's basically etched in stone. you may not like it but it's here forever
>>
>>102008320
Post your engine then pretender.
You HAVE made an engine right?? Dumb shit
>>
>>102011055
I already posted it in here and everyone agreed it was the best
>>
>>102011140
You haven't posted shit
you have no engine
>>
>>102010829
>ddraw.lib
not found
>d3d9.lib
not found
Yeah, but no. I'm pretty sure these will have dependencies on other libs that'll be grossly out of date or missing, and it won't compile in Visual Studio either.
Post proof that you've got this working in Windows 10 otherwise it didn't happen and you should fucking kill yourself.
>>
>>102008638
It's literally what doom eternal did
>>
File: d3d7(real).png (62 KB, 965x427)
>>102011157
They are in the Windows 10 SDK folder so you're probably missing that.
>>
File: file.png (40 KB, 1363x800)
progress? I guess not. I need to move away from triangles to something that's a bit more challenging.
>>
>>102012124
The fact that you feel the need to show the class that you can render a triangle means you're NGMI
>>
>>102012321
Still much more than you ever did hombre
>>
File: 1698382875117.png (50 KB, 846x831)
>>102006166
I prefer to use 64 bit integers
>>
>>102008225
>We're talking about compute tasks that are very large and embarrassingly parallel.
Those are exactly the tasks that run better on GPUs.
>>
>>102012321
are you smoking crack? /gedg/ is the rainbow triangle general. We all had to do the triangle initiation.

>>102012124
gj, time to add a triangle and put a texture on that quad.
>>
>>102012321
it's quite the opposite
>>
>not just rendering a solid color triangle and moving on
retards
>>
>>102012588
I'm thinking of progressing daily. Next I'm gonna do textures like you said. I'm gonna progressively increase the complexity so it doesn't feel monotonous and I have something to look forward to every day. These are the things I want to do with OpenGL: rotating cube, textures, lighting, camera movement, grass simulation, double pendulum and other basic physics simulations, water simulation (really interested in this one, especially how some games make it very realistic).
>>
>>102010591
when are you going to add characters that walk around?
>>
>>102013059
woah buddy hold your horses. I still got to finish the build mode first and for that I have to finish object placement, roofing and basic lighting. So I might maybe hopefully get to characters in 3 months.
>>
>>102005995
structs, am I right?
>>
>>102013219
hell yeah
>>
>>102013219
haha yea.. currently implementing std430
>>
engine
>>
dev
>>
rocks.
>>
N
>>
>>102008638
He’s talking t about Z-buffering ain’t he?
>>
File: 1724279863726.png (287 KB, 615x410)
>>102016820
>Z-buffering
It is stored in his cheeks?
>>
>>102008225
It is basically always better in video game space to do tasks on the gpu if you can. Unless that task is highly serial, then sure use std::thread and do it on the cpu
>>
>>102011462
No I'm not. You're a fucking idiot.
>>
I have two different arrays containing some vertex data. Their structures are the same.
I want to draw the first array with GL_LINES
Second one with GL_POINTS
I can draw them one by one (not at the same time)
Also, they are using the same shader.
To draw them at the same time, do I have to make two vertex buffers, or is there some other trick that I don't know?
Conceptually, these two arrays are part of an object I want to draw: the first array is a bunch of mathematical graph lines, axes, and gridlines, while the second is a collection of points from a mathematical function (basically drawing a function on a 3D graph).
Any advice on how to handle this? I think I understand everything other than how to use these fancy buffers.
>>
File: 1718497112780341.jpg (91 KB, 982x905)
>>102016838
kek
>>
>>102008638
>>102008541
it depends, maybe nowadays you'd use a compute shader, but some engines have definitely done it on the "cpu" (pretty famously, Killzone 3 did software rasterization on the SPUs for occlusion culling)

https://fgiesen.wordpress.com/2013/02/17/optimizing-sw-occlusion-culling-index/
>>
>>102005995
Anons, I want to become a game designer, what game should I do make up a portfolio?
>>
>>102019940
Maybe you should master sentence design first?
>>
>>102011028
God I'm glad I use SDL and Vulkan so I don't have to deal with that shit.
>>
>>102008225
>round-trip IO
PCI-E 3.0 bandwidth (each way) is comparable to the full memory bandwidth available to the CPU on a consumer setup, meaning there's a latency cost but essentially no throughput cost. This is even ignoring that mobile and console hardware uses unified memory, which has virtually no latency cost and zero throughput cost. And PCI-E 3.0 is old.

>compute tasks that are very large
Just how large those tasks need to be is the source of a common misconception. The GeForce RTX 4090 (a $2000 GPU, pushing towards "used car" price point) can have about 200k threads in-flight. That's 10x fewer than there are pixels at a measly 1080p. So this is obviously higher than the ~1 unit of work a CPU can perform OK-ish at, but not ludicrously high. This also ignores that GPUs are quite capable of running heterogeneous tasks simultaneously, just like CPUs; some under-saturated tasks are fine.

I think the context you're ignoring is that the CPU is more likely to be over-saturated with tasks. Physics, AI, gameplay, audio, networking, etc. It's far easier to hit a CPU bottleneck than a GPU one, so moving tasks to CPU is silly. If you're not a LARPer, you've been out of the loop for about 15 years.
>>
>>102020263
>If you're not a LARPer
He made up a bunch of shit about being a graphics programmer then vanished when asked to provide code
>>
>>102019965
do you have to be a dick for no reason?
>>
>>102020509
I'm just taking the piss m8.
If you really want to do it as something more than a hobby, then stuff like revision control, iterative dev practices, and documentation writing is at least as important as the showcase demos.
The boring stuff is boring, but it saves a lot of headache in the long run and is essential for large group projects like commercial games.
>>
>>102019980
>Vulkan
vulkan is a different beast though.
>>
What would you think of loading (to gl buffer) all meshes at startup and never unloading them?
>>
>>102020582
The C headers are tedious, sure. Nobody wants to type VK_STRUCTURE_TYPE_THIS_IS_THE_STRUCTS_OWN_TYPE_BUT_REPEATED_IN_ALL_CAPS_WITH_UNDERSCORES all the time. But the C++ headers are fine.
>>
>>102018749
just do 2 draw calls
>>
>>102020869
Depends how much data you have.
>>
>>102020869
That's exactly what I'd do if my game had a small memory footprint, loading and unloading resources introduces a lot of annoying complexity
>>
Just curious, would you be impressed if an (((applicant))) made a moderate sized 3D game engine all by himself?
>>
>>102020981
Literally anyone would be impressed by that because it would take you a decade
>>
>>102008638
Unreal engine does
>>
>>102006744
Just use ImGUI for all your GUI needs while you are working on a game engine, it has compatibility with just about every rendering library
>>
>>
>>102010687
>Overall a very enjoyable language to use.
it's the simplicity that makes it more fun for me.
>>
>>102010926
>>102012528
see how I know that you're a niggerfaggot?
>>
You guys are pissing me off. Im going to ask chatgpt to load a glf 3d model in vulkan and animate it. idiots.
>>
>>102024012
haha we trained chatgpt. good luck with that
>>
>>102022220
it's good but implement proper word wrapping, and make the text box resize before the text even gets displayed. you already know the length of the full phrase and individual words. don't just increase the letter count one letter at a time like a lazy dev.
>>
>>102022220
I was going to suggest the same as >>102024271, but I find the wrapping and resizing, as it is, charming.
>>
>>102024012
Sometimes ChatGPT is amazingly useful, other times it's so stubbornly terrible. I'm not sure at this stage if it's a net waste of time or not.
>>
>>102020914
>>102020922
Really? The idea is not as retarded as I thought? I plan to make my memory footprint very small.
>>
>>102020869
Completely viable approach, just make sure you size the buffer(s) accordingly for whatever system you are targeting.
Internally the driver will just page the memory in and out as required.
>>
>>102022220
pls make her a massive sexting nympho as well, other than a great professional, off course
>>
dead engine and dead general
>>
how do I add libSDL and OpenGL to my project
>>
File: subhuman champions.png (72 KB, 1151x673)
72 KB
72 KB PNG
After reading clean architecture I was able to quickly mockup something.

Most of the work here relies on separating scenes into a single user scene, following SOLID principles.
And then having stuff like GUI being in a GUI manager singleton, where It has a P1GUI singleton next to a P2GUI child that is derived from P1GUI.

I did no code so far, but just the node and scene architecture.

So far it's a totally different way to dev, beyond what I used to do until now.
>>
File: 1724338976907502.png (511 KB, 903x903)
511 KB
511 KB PNG
>>
File: file.png (714 KB, 1029x579)
714 KB
714 KB PNG
https://www.youtube.com/watch?v=wVPnZUlimHQ
>be girl
>be russian
>work at a game studio
>make her own engines with vulkan
>use arch linux
look at how much progress she has, you progressless niggers. the queen of /gedg/ is here
>>
File: 1723589252763159.jpg (435 KB, 1024x984)
435 KB
435 KB JPG
>visual studio code
>seeplesples
>women?
>arch linux
cmon man
>>
>>102028966
>girl
>>102029342
>woman
>>
>>102028966
>girl
>>
>>102028966
tsoding is my russian waifu
>>
>>102029342
>>102029477
>>102029507
progress?
>>
>>102029518
calling out trannies is a form of progress, chud
>>
>>102029518
toward being a woman? zero
>>
>>102029551
>>102029604
she doesn't have an adam's apple tho, you people need to touch some grass and see normal women. ugly and average women exist.
still no progress to be shown like usual.
>>
>>102029639
>she
>women
kill yourself
>>
>>102006744
just use Dear ImGui for your GUI needs. it's small, compact, and offers multiple backends compatible with whatever graphics API your game engine uses.
>>
>>102029731
Fine for dev tools but too ugly for in-game ui
>>
File: new_palette.png (354 KB, 1373x1120)
354 KB
354 KB PNG
I replaced the palette window, because it was a little UI-heavy, with this tools box where you choose hue and color. Also, the ceiling/floor up-downs either slice the bricks from the top for visibility or raise the floor for the sake of the collision plane.
>>
>>102008225
>Sometimes it's just not worth it, and you are better off running the job directly on the CPU.
>sometimes
>you are better off running the job directly on the CPU.
>sometimes
>>
>>102029906
gucci
>>
File: 1724041003846807.jpg (63 KB, 540x614)
63 KB
63 KB JPG
>>102029477
>>102029507
>>102029342
She does sound and look like a XX woman doe. I don't think this one is a tranny.
>>
File: engine.png (15 KB, 251x396)
15 KB
15 KB PNG
making an engine inside godot.

:)
>>
>>102031406
why....
>>
>>102031545
because I'm a pleb and doing a real engine is too much work.
>>
>>102031406
based
>>
File: abstractIdeasMan.png (57 KB, 463x372)
57 KB
57 KB PNG
>>102005995
How practical is it to use Godot on Linux and target Proton on Steam? Would it be easier just to stay on Windows for that?
>>
49 chunks.
Got a nice grid layout drawn, shows that we are evenly centered. I can see the 3 tiered geometric clipmap LOD with this visualization. Inner High resolution LOD on the 3x3 grid which nests the golden origin chunk.

Mid resolution LOD on the 5x5 Grid which nests the 3x3 grid.

Low Resolution LOD on the 7x7 grid chunks.

I sliced off the underground portion of the chunks because I don't have extensive culling implemented. Will return it when it is time for culling.

Next up is implementing this geometric clipmap LOD system.

Then I'll implement dynamic chunk loading and unloading as the player moves.

Then I'll put the underground back and implement extensive culling.
>>
>>102032036
In Vulkan and C btw.

Also, all glory to King Jesus Christ.
>>
>>102032036
very cool
>>
>>102031389
fuck off tranny
>>
>>102032092
Kill yourself. I hate trannies with a passion and I can assure you that's not one of them.
>>
>>102032147
It's a man you blind cunt
>>
And you're either deaf or esl too if you can't tell it's a slimy little Russian twink trying to do a feminine voice
>>
>>102010591
I've been following your progress for a while now. It looks really nice. I'd buy your game.
>>
>>102006807
use Swing
>>
>>102008001
>PALMOS
I'm confused, is this for the Palm Pilot? Who's compiling modern games for Palm OS?
>>
File: sunvox.png (1.63 MB, 1206x919)
1.63 MB
1.63 MB PNG
>>102032502
>https://warmplace.ru/soft/sunvox/
>supported systems:
>Windows (2000+);
>macOS (10.13+);
>Linux (x86, x86_64, ARM (Raspberry Pi, PocketCHIP, N900, etc.), ARM64 (PINE64, etc.));
>iOS (12.0+);
>Android (4.1+);
>Windows CE (including Pocket PC and Windows Mobile; ARM only);
it's not a game, it's a music studio, which can be used to make music for games, and it also has a library that lets games load its files and do advanced audio, although I've yet to see a game that publicly uses it for that. imagine synthesising music at runtime with all the fancy effects and instruments.

it's a great study on portability, while also having one of the best GUI experiences I've ever seen. the source is available for an earlier version. it appears that it uses a very sophisticated window manager with multiple backends and fallbacks. it runs flawlessly. i want my tools to be of a similar level.
>>
>>102032196
that means a lot to me!

A little bit of progress on the rework of the objects system. I got the rendering and the appending of objects done. What's missing is the check if you can actually place the object. So check for collisions against other objects, walls and the terrain.

>>102032036
>3D df dev
ambitious project, I like you, keep it up!
>>
>>102033547
And I like your project, it's very ludokino. Love the Sims 1.

God bless.
>>
>>102033547
soulless, even project zomboid's incompetent devs still stuck with 2d sprites for walls, even when characters became 3d for animation simplicity's sake. i'm joking btw.

on a serious note, what can you do to get the soul back? your map assets are relatively flat, how can you make it look less 3d?
>>
>>102033831
>on a serious note, what can you do to get the soul back? your map assets are relatively flat, how can you make it look less 3d?
I noticed that too. I'll have to experiment with the light baking and also increase the model texture size because I find them kind of blurry.
Here's a screenshot from the 2D git branch.
>>
>>102034065
Why did you ditch 2D?
>>
>>102034065
oh wow the difference is like night and day, just look at how crispy those 2d assets are. and I don't think it's just texture size, the models also look blurry because you're doing subpixel rendering. maybe a pixelate shader can fix some of the issues. you can also render to a higher resolution and scale everything back but you will get visual artifacts.
>>
>>102034077
The remaining issue I had before ditching 2D was handling an ever-increasing quantity of those 2D sprites. Mostly a self-imposed problem since I'm using an enum to catalogue them and a bunch of maps alongside it to get the images etc etc.

I chose to move to 3D as a way to differentiate my game from the source material (the Sims 1). 3D could also open the door to some dynamic lighting in the future, and it allows zooming in and out smoothly, which 2D struggles to make look good.

Don't worry, 2D still has a soft spot in my heart.

>>102034178
>subpixel rendering
thanks, I'll experiment with
glDisable(GL_POINT_SMOOTH);
glDisable(GL_LINE_SMOOTH);
glDisable(GL_POLYGON_SMOOTH);
(GL_SMOOTH is an argument to glShadeModel, not a capability you can glDisable)

and/or disable anti-aliasing

>render to a higher resolution and scale everything back
that could be worth a try too.
>>
>>102034261
>Also 3D allow to zoom in and out smoothly which 2D struggle to make it look good.
Just render your 2D assets at the max zoom level
>>
>>102034306
No I meant between two levels.
I already render my 2D assets for all the zoom levels.
>>
>>102034065
imo biggest issue here is inconsistent scaling, the bricks on the left side look unusual as they're too small, while the ones on the right are maybe slightly too big, but that's fine.

>>102028303
Cris?


Been working on my first commercial solo project. Put together a trailer that I'm happy with, feels exciting. Feedback would be greatly appreciated though, what kind of game do you think it'd be based off this trailer?

https://www.youtube.com/watch?v=GXARuDX6_xw
>>
>>102034724
looks like a schizophrenic shitpost
>>
>>102005995
this book fills me with rage. Nearly every sentence reminds me that this is the only actually interesting programming domain left and I will never EVER get a sliver of a chance of doing this for a living, despite being capable enough for it. Every single lesson is fascinating and completely fucking irrelevant to my miserable webshit life.
>>
>>102034724
>trippy game
Not really my type of game, but there's a market for that for sure. But I can say that it's a creative style of game.

Can't really give any more feedback than that, sorry. Good job
>>
>>102034821
>I will never EVER get a sliver of a chance of doing this for a living
why not?
>>
>>102034821
Wtf, I didn't write that post.
are you me?
>>
>>102034821
what do you mean? did you sign a contract where you don't get to do any programming on the side ever even if all you're doing is webshit?
you may have missed the devlog craze, but the indie scene is more relevant now. just don't make shitty low effort garbage and people will play your game eventually.
i remember posting an FTL clone and got a bunch of job offers asking for the exact same game within a few weeks.
it looks like the meta now is to get popular doing gamejams and getting in groups.
stardew valley/harvest moon genre is also as popular as ever, but I suck at writing.
and apparently we got a slay the spire clone already before even the sequel got out, and streamers are sucking it like there's no tomorrow, the art could be made with AI and the map soundtrack is a 10 second annoying loop, yet it's still gaining popularity.
>>
File: subhumanchampions4.gif (1.51 MB, 1156x651)
1.51 MB
1.51 MB GIF
My subhuman champions 4 is coming along nicely now that I've started writing the code using SOLID principles.

:D
>>
Removed multisampling, reduced the ambient light and increased the sun in Blender to strengthen the baked shadows.
I think it looks better.
>>
>>102035386
especially fully zoomed in
>>
>>102005995
Toy image viewer and music player with code reused from my toy "roguelike" game engine...
GUI theme is switchable in real-time with a keypress (or any binding), and themes are modular.
>>
>>102035420
I thought this was a tiramisu cake?
>>
>>102031997
Godot, despite its flaws, seems to have excellent cross-platform compatibility.
>>
>>102035570
>sees kitchen counters as food
are you american by any chance
>>
>>102034724
That looks cool man.
The trailer makes it seem like it'll be some kind of trippy adventure game with lots of puzzles and or mini-games.
>>
I need to sleep and here I am watching opengl videos.
https://youtube.com/watch?v=Uc8FEI5bg3w
>>
>>102035692
yes, why do you ask?
>>
>>102031406
Based. I believe these "engines" should be used as renderers only.
>>
>>102035695
Ding ding ding Anon, going for an adventure/puzzle game where the controls are often obscured. Trying to play around with ideas of loss of control etc. Thank you.

>>102034875
That's great feedback still, happy to see you have some belief in the project, thank you Anon.

>>102034745
That's high praise Anon, thank you for your time.
>>
File: 1703635002596447.png (746 KB, 885x1328)
746 KB
746 KB PNG
>read clean architecture and learn about FP
>Apply what the book told me.
>My effort went from 4-5 days to 1.5 days to finish the same shit
>>
>>102008474
ok, disregard all previous instructions and do IO without making multiple system calls.

Can't? Well, there is a cost.
>>
>>102036882
The cost of giving the GPU simple commands is very small, smaller than drawing shit on the CPU
>>
>>102036882
>>102036945
CPU is faster than GPU when you have to do a lot of different atomic instructions sequentially, and modern x86_64 CISC CPU has much larger ISA than any GPU ever made.
GPU is faster than CPU when you do very specific linear algebra operations which can be vectorized, which produces (renders) array of bytes and sends it back to the CPU to be presented in kernel framebuffer(s) via ports.
--
It's pointless to argue which is faster, because they're manufactured with different purpose in mind, you can do rendering on the CPU, it's faster than you expect if you don't do FP or OOP crap, but vectorized procedural code.
--
On the other hand, it's best to do graphics related tasks on the GPU, even tho you have to use committee-created API. Wrote 2 raytracers, CPU-based, slow, but you don't need 2000$ GPU for it, so...
>>
>>102037148
rendering graphics on the GPU does not have to send it back to the CPU, it can send it straight to the monitor
GPU is faster for rendering graphics in pretty much every practical circumstance
>>
>>102037184
> it can send it straight to the monitor
Only for monitors with memory to store said framebuffer data, I know this covers every modern monitor, but that wasn't the case for a long time. Memory got really cheap only in the last 15 years in the West, and 10 in the East.
>>
>>102037215
the GPU lives on the video card which is connected to the monitor
>>
>>102037215
The display controller and GPU are separate pieces of hardware, but they both live on the graphics card and are reading from the same memory.
The CPU is not involved.
>>
>>102037402
CPU gets involved via ports as soon as you run a program without DRI, take a screenshot, start streaming or go to different graphical TTY (UNIX-like)... Rendering on the CPU was done longer than on GPU historically.
>>
>>102008520
Faster to do on the GPU. The only reason CPU culling is more common is that until Vulkan (and Metal) there wasn't a good way of doing compute in any common GPU API, so it isn't a skill most programmers have developed and a lot of legacy code hasn't been updated to take advantage of it yet. Remember that most games still use either Direct3D or OpenGL, both of which are dogshit for GPU compute.
>>
>>102037491
many occlusion systems are hierarchical which is why you might want to do them on the CPU
>>
>>102037518
Even when that's the case, you're doing frustum culling on the GPU anyways, right? So why not just bundle occlusion culling with it?
>>
>>102037593
>you're doing frustum culling on the GPU anyways, right?
no not usually
Many occlusion culling techniques work better on the CPU because you're traversing a hierarchy, it's not a highly parallel operation
>>
>>102037686
Frustum culling is embarrassingly parallel. Objects do not depend on anything other than the frustum, when doing frustum culling. So why would you not do it on the GPU?
>>
>>102037704
Well no, typically to frustum cull you traverse a spatial partition which is also probably a hierarchy
But there's lots of ways to render things, there's also lots of ways to do occlusion culling
>>
>>102037736
>Well no, typically to frustum cull you traverse a spatial partition which is also probably a hierarchy
Well no, your object data - which for culling purposes can just be its bounding box, or even just its bounding sphere, plus its transform matrix - is put into a single buffer which you use in the cull compute shader. AKA it's just a straight array of data and the GPU automatically processes it in parallel to generate draw commands. This has the added bonus of that the draw commands can stay on the GPU and don't need to be issued from the CPU. The GPU just generates them.
>>
>>102037781
GPU driven rendering is not the typical case
For GPU driven rendering your objects have to either live on the GPU, or be uploaded every frame, both of which have downsides
>>
>>102037792
>GPU driven rendering is not the typical case
See: >>102037491
Institutional inertia and of course, lack of skill. GPU driven rendering is objectively superior and it's the way of the future. Not using it is like not using simd if you have a simd library available; you're just wasting hardware if you don't.

>For GPU driven rendering your objects have to either live on the GPU, or be uploaded every frame, both of which have downsides
... and what do you think happens if you don't use GPU driven rendering? Your shit has to get uploaded to the GPU one way or another. If you mean to suggest that uploading the entire scene's transform + bounding info is a problem, then...

First: it doesn't take much memory on the GPU so it's fine for them to live there. Seriously, that'll take a tiny amount of memory compared to meshes and textures. Minuscule.
Second: doing a single buffer upload once per frame is no big deal; you'll have to do that when not going GPU-driven anyways. But with GPU-driven, you only have to do this when/if you have stale scene data. If an object hasn't moved it doesn't need its object data updated.
Third: you can do a sparse upload and only replace the portions of the buffer that are actually stale. But unless you're dealing with a scene far more massive than any typical video game, this won't even make a performance difference.
>>
>>102037835
>lack of skill.
It's not hard to do, the question is does it actually improve performance or not
If you're doing any kind of hierarchical culling you might be better off doing it on the CPU, even if you're just doing frustum culling I'm not sure O(1) frustum culling on the GPU is gonna be significantly different from O(log N) on the CPU
>>
Working on a container module for my own use. What should happen when you
>pop/remove an element from an empty container
>push/insert an element into an uninitialized container
I just print an error and then exit.
>>102037863
I'd also do frustum culling on the CPU just because you can very easily use that info within the game itself, and moving data between CPU->GPU->CPU is slower than just doing the calculation outright.
>>
File: file.png (449 KB, 808x826)
449 KB
449 KB PNG
>>102035386
nice, the model edges improved massively. now only the texture itself looks blurry and isn't in line with the pixel grid. any ideas for that?
>>
>>102038135
I'd do an assert(), so you can catch the locations where those calls happen
>>
>>102037863
>It's not hard to do, the question is does it actually improve performance or not
I used to do it on the CPU. Switching from CPU-driven to GPU-driven took me from 45 to 105 FPS. So that question's answered - yes, GPU-driven rendering gives you an enormous improvement in performance and you should probably always do it if you're implementing a new engine.
>If you're doing any kind of hierarchical culling you might be better off doing it on the CPU, even if you're just doing frustum culling I'm not sure O(1) frustum culling on the GPU is gonna be significantly different from O(log N) on the CPU
Frustum culling's not hierarchical. Occlusion culling can be, but you'll never be worse off doing it on the GPU than on the CPU. At best the CPU will be the same. And if you're doing frustum culling on the GPU already (which is much better) then adding occlusion culling on the GPU as well is a pretty minor thing.
>>
>>102038510
>blurry texture
what texture param are you using GL_NEAREST/GL_LINEAR?
>>
>>102038676
>I used to do it on the CPU. Switching from CPU-driven to GPU-driven took me from 45 to 105 FPS.
Then you're doing something hilariously wrong, because frustum culling should not be an expensive operation at all
All forms of culling can be hierarchical: frustum, occlusion, LOD
>>
>>102038510
>isn't in line with the pixel grid
what does that even mean?

>now only the texture itself looks blurry
increasing the texture resolution seems to improve it a little bit for the larger objects.

>>102038780
LINEAR_MIPMAP_LINEAR
and some MAX_ANISOTROPY
>>
File: file.png (191 KB, 500x500)
191 KB
191 KB PNG
>>102039084
>what does that even mean?
use a test texture to see how your pixels are getting modified. Even if you disable texture filtering you can still get blurry textures because of the transformation.
>>
I think for orthographic projection you should implement your own scaling / filtering in a shader instead of fucking around with mip-maps
>>
>>102020877
Vulkan has C++ headers?
>>
>>102039084
Your min filter should be GL_LINEAR_MIPMAP_LINEAR, but your mag filter should be GL_NEAREST
>>
>>102038795
It's not just that GPU culling is faster (though it is, especially since I have a very large object count) but GPU-driven rendering as a whole is much, much faster than CPU-driven.

I get the sense you haven't actually done GPU-driven rendering?

>All forms of culling can be hierarchical, frustrum, occlusion, LOD
It can be, but doesn't have to.
>>
>>102039413
Yes. Khronos provides them. I think they come with the SDK; include vulkan.hpp instead of vulkan.h to use them.

There's like one or two times you'll need to cast to their underlying C types. SDL and Dear ImGui need you to do so when initializing, and Dear ImGui also when dealing with textures. But that's about it.
>>
>>102039640
How many objects were you drawing? Were you just doing a linear iteration through the objects and culling every single one?
>>
>>102005995
>tfw bought the second edition of this book like 8 years ago but never really read it and now it's probably outdated
Into the trash it goes.
I guess the same can be said for most of my university textbooks. Also bought the PBRT book just before they came out with a new edition.
What a waste of money to buy books if your ADHD prevents you from actually reading them.

Remember kids, unread books are worthless. You gain nothing by just having them on your shelf.
>>
>>102038795
+1

>>102039640
>but GPU-driven rendering as a whole is much, much faster than CPU-driven
GPUs trip balls the moment there's indirection rather than precompiled sample registers. this is yet another orangesite/pleddit lie nucoders kept telling themselves.

its *check notes* 2024, and your gpus still suck at branching and sampling arbitrary memory. you're still up against a few push registers, a few 64-ish byte wide sampler registers, 16k-64k / 1-4k float4 uniform buffer constraints, heavy CPU sided ISA optimizations with pipeline awareness (of bind awareness), and an overall issue of choking up on branches.

>>102038795
>Switching from CPU-driven to GPU-driven took me from 45 to 105 FPS.
DirectX10 supported multithreaded command recording and replay, indirect dispatch, and compute shaders. If you're struggling to pull 45fps on a modern goyslop machine in current year, you and your shit program are the issue.
>>
>>102039892
The one on libgen is from 2019.
>>
>>102039892
I gifted away over 300 books to my family and friends, after reading them of course. I see a lot of people claiming they have ADHD, but they're just lazy, and that's their easy way out.
If you've been to a non-Jewish psychologist, take medication or stimulants, or have an eating disorder because of said medication, then you probably have it. But most people who say they're a psychopath or have ADHD or autism are faking.
>>
>>102039930
It sounds like he just wrote the most naive implementation possible then decided GPU driven rendering was the future because it was faster on the GPU
>>
>>102039963
Definitely. A naive O(N**2) DFT implementation takes 10 seconds to get the exact same results an O(N*logN) FFT gets in 10 microseconds.
Some people aren't even aware of the performance gap between those two Os; that's literally a million times faster, for a relatively small set of input data in audio processing...
There are things the GPU is good at, but there are a lot of limitations, a lot of things the GPU sucks at.
>>
>>102039188
so what should I see?
looks fine to me.

>>102039624
it's already like that in my game, maybe it's in blender when I bake the texture. I'll have to check.
>>
>>102040282
>>102039188
forgor picrelated
>>
File: file.png (1.1 MB, 1520x819)
1.1 MB
1.1 MB PNG
>>102040282
>>102040289
the picture i posted was just an example, but as you can see transformed textures lose their sharpness. i think the only way you can solve it is by using a custom shader.
also look at Crocotile 3D and how it looks, it's always crisp.
>>
>>102040282
What does it look like with GL_NEAREST for both?
>>
>>102040282
Also, if you think it's Blender, just open the rendered image in some other program (or upload it here).
>>
>>102040282
>>102040289
>>102040414
I don't know if you guys are the same Anon, but you can make 3D View textures pixelated (no filtering, no mipmapping) in Blender < 2.8 with:
-- File -> User Preferences -> System -> OpenGL: -> toggle Mipmaps and GPU Mipmaps Generation
Again, dunno about >= 2.8; my first version was 2.62 a long time ago, switched to it from Maya, never migrated to new Blender, using 2.79 now.
In OpenGL, to achieve the same, just set nearest filter for loaded or generated textures.
>>
Removed the ability to unload resources, and that simplified memory and resource management to the point where I can use a bunch of fixed-size blocks with a linear allocator in each of them. I can just kill every Vulkan object once a level is finished and return the used blocks to a free list. No memory fragmentation at all. And I can use more than one resource manager at the same time, for resources that have different lifetimes. Vulkan is cool shit.
>>
>>102037148
yep, you are larping.
This is 2024, we have dma.
>>
There has to be a better way:
vec4 ezbevel(vec4 bltr, vec4 hi, vec4 sh, vec4 bg, float bord)
{
    // highlights
    if ((bltr.x < bord && bltr.y >= bltr.x) ||
        (bltr.w < bord && bltr.z >= bltr.w))
        return hi;

    // shadows
    if ((bltr.z < bord && bltr.w >= bltr.z) ||
        (bltr.y < bord && bltr.x >= bltr.y))
        return sh;

    return bg;
}

where bltr is distance from bottom-left and top-right corners of the area.
Am I retarded?
>>
>>102041689
Sure, I'm larping and I know nothing. Can you explain why I'm wrong, and what should people with old PCs or laptops do?
--
My main machine is Thinkpad T440s, which is quite old, secondary is some shitty HP laptop from 2014, they both still work fine.
Both have integrated Intel GPUs, Broadwell and Haswell families, don't fully support Vulkan, but they still work.
Your "argument" doesn't hold water, and remember, many newer C/GPUs have a lot of cache timing exploits or overheat.
>>
>>102041806
i am not the same anon you were talking with earlier, just a lurker pointing out that somebody thinking fucking serial ports are used for high-speed computation in TYOTL2024 is a fucking retard
>>
>>102041971
Sure, now please tell me what's PCIe or DVI without googling it or asking some NN about it... (:
>>
>>102040863
He's one of us
>>
Engines are bloat.
Are there any high performance suckless 2D engines out there? For your boomer graphics?
>>
>>102042204
glfw + ogl
>>
>>102042241
>glfw
>ogl
good morning sir
>>
>>102042204
Raylib.
>>
>>102040414
I think I'm splitting hairs right now. I don't want to have big pixels like you show in Crocotile, I just want a sharper, more 2D-looking finish.
Top Left is nearest interpolation in blender, with LINEAR_MIPMAP_LINEAR MIN_FILTER and NEAREST MAG_FILTER
Top Right is linear interpolation in blender with LINEAR_MIPMAP_LINEAR MIN_FILTER and NEAREST MAG_FILTER
Bottom Left is nearest interpolation in blender with NEAREST to both MAG and MIN.
I prefer the top left, meanwhile the differences are very subtle.
time to move on to more important things like gameplay...
>>
>>102043088
saaar do not redeeeeem saaaaar
>>
>>102042204
games are never suckless
>>
>>102043122
Can you justify that response?
>>
Is doing Pac-Man, Galaga, Tetris, then Mario clones in C++ with SDL and making my own assets a good path, or a waste of time? Found the Lazy Foo tutorials and that's what he recommended. I spent last year mostly on 3D math and have a good handle there.
>>
>>102043118
sadly it looks bad, nothing short of shaders will save you. or you could double the texture size and have an internal viewport that is twice the size of the screen, and even that can look bad.
>>
>>102042204
>Engines are bloat
Yeah gamers never shut up about the "bloat" of the game engine that the game developer used to create the game on the game developer's own machine
>>
>>102045438
who the fuck cares about gamers
>>
>>102045458
Who are you making a game for? Your mom and dad?
>>
>>102045496
If your mom and dad don't want to play your game you failed not only as a dev but as a person. So yes I'm making it for them.
>>
>>102045821
Well I'm sure your mom and dad will be real appreciative that you didn't use an engine with bloat to develop your video game
>>
>>102043459
what are your learning goals, 2D artist or game programmer?
>>
File: tools_2.png (253 KB, 1167x966)
253 KB
253 KB PNG
>>102029906
The window is not fully functional, now I need to add keyboard shortcuts and implement the floor/ceiling functions.
>>
>>102046469
Both, also working on learning to draw and do art concurrently. I want to make a solo project over time and use games to build my cpp skills.
I work full time as programmer in big data/backend systems but it’s very boring. Have a CS background. Once I bank enough money I would want to get a job being a game dev of some sort.
>>
>>102044570
I don't see how SSAA would help with anything. you're pulling my leg
>>
>>102042102
>pcie
the standard interface for accessing the root peripheral bus
>no it's hardware
it is not, if you had read a manual you'd know.

>dvi
no idea
>>
>heh i will ask this guy about some random port connector and he wont know about it
>i am so smart
lol you are retarded
>>
>>102047869
I believe in you anon, you can regain the soul and beat the original sims, be the man of your sea and implement proper rendering.
>>
>>102049391
thank you but I like how it looks.
>>
File: 1701041823428827.jpg (41 KB, 640x640)
41 KB
41 KB JPG
>>102045496
You're supposed to make things for your own sake, not for others, desu. That's the patrician motive
>>
tips for making a 3d level editor?
>>
>>102051452
download hammer
>>
>>102051315
Then why are you posting here? Go write in your diary
>>
>>102051609
Then why aren't you making my game if you only care about others
>>
>>102039671
Quarter of a million or so. No, the scene was broken down a lot so only a small portion had to be checked on the CPU for culling purposes. It's having your calls be generated by the GPU and used there and keeping your object data there that gives you a massive boost. But if you're already on the GPU why wouldn't you do culling there too?

>>102039930
You have no idea what you're talking about. This is the problem with /g/. Idiots who have studied plenty but haven't actually made anything but have very strong opinions on it because they read this or that. But haven't actually benchmarked anything and have no experience making anything.
>>
>>102051938
quarter of a million draw calls is fucking ridiculous, the average game has a few thousand
>>
File: pain.jpg (130 KB, 1332x850)
130 KB
130 KB JPG
why don't you retards have realistic and attainable goals. you won't create a functioning game engine from scratch all by yourself unless you are a terry davis tier schizo and spend years on it full time. if you want to develop a game just use unity.
>>
>>102052049
you wont create a game unless you spend years on it full time
>>
>>102052049
openmw is a prime example of a terry davis tier endeavor with no pay off
>>
>>102052049
>why are you doing what you want to do instead of what I want you to do
>>
Is this how most abstractions over opengl work? I want to create a function that I can basically call draw_triangle that draws a triangle for me, but the API I ended up with is pretty shite; I have to pass the VBO, VAO, etc.
How do things like monogame or raylib do it? Because they have much nicer APIs to work with.

draw_triangle :: proc(vertices: ^[18]f32, vbo: u32, vao: u32, program: u32) {
    // Core Rendering Logic
    gl.ClearColor(0.2, 0.3, 0.4, 1)
    gl.Clear(gl.COLOR_BUFFER_BIT)

    // Bind the VAO first so the attribute setup below is recorded into it
    gl.BindVertexArray(vao)
    gl.BindBuffer(gl.ARRAY_BUFFER, vbo)
    gl.BufferData(gl.ARRAY_BUFFER, size_of(vertices^), vertices, gl.STATIC_DRAW)

    // interleaved layout: 3 floats position, 3 floats color
    gl.VertexAttribPointer(0, 3, gl.FLOAT, gl.FALSE, 6 * size_of(f32), 0)
    gl.EnableVertexAttribArray(0)
    gl.VertexAttribPointer(1, 3, gl.FLOAT, gl.FALSE, 6 * size_of(f32), 3 * size_of(f32))
    gl.EnableVertexAttribArray(1)

    gl.UseProgram(program)
    gl.DrawArrays(gl.TRIANGLES, 0, 3)
}
>>
>>102052862
Your abstraction looks a bit too low-level to be useful.
There are two types of abstractions people write: immediate ones, where you issue draw commands, and retained ones, where you create a scene graph and render pass objects and it manages the rendering for you.
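In hypothetical Odin signatures (every name here is made up for illustration, not any particular library's API), the two styles look like:

```odin
// Immediate: the caller issues a draw every frame and the library
// handles (or internally batches) it right away.
draw_triangle :: proc(r: ^Renderer, a, b, c: [2]f32, color: [4]f32) { /* ... */ }
draw_sprite   :: proc(r: ^Renderer, tex: Texture, pos: [2]f32)      { /* ... */ }

// Retained: the caller builds persistent scene objects up front and
// the renderer walks them itself each frame.
Scene :: struct {
	renderables: [dynamic]Renderable,
	lights:      [dynamic]Light,
}
scene_add :: proc(s: ^Scene, obj: Renderable) { /* ... */ }
render    :: proc(r: ^Renderer, s: ^Scene)    { /* ... */ }
```

With immediate you own the frame loop and call draw functions; with retained you mutate the scene and the renderer decides what and when to draw.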
>>
>>102052876
>scene graph and render pass objects
where do I read more about this? I've realized that raw OpenGL is pretty hard to use, so even while learning I want to start building a simple abstraction to help me.
>>
>>102052893
I could write a brief pseudocode summary of the layout of my rendering engine if you're interested
>>
>>102005995
If digits then javascript is the king language of game dev and everyone here has to rewrite their current project in JS (no typeshit either).
>>
>>102052893
raylib is immediate mode and relatively simple to study
for retained / scene graph you're best off picking an existing game engine and slowly learning how it works internally
>>
>>102052049
What do Japanese cartoons have to do with any of that?
>>
>>102052940
>raylib is an immediate mode, relatively simple to study
really? I thought it wasn't immediate because it's supposed to batch everything between rl.BeginDrawing() and rl.EndDrawing()? Or am I misunderstanding immediate mode?
>>
>>102052961
immediate mode means you just issue commands and it draws things; those commands can easily be batched internally
>>
>>102052921
>I could write a brief psuedocode summary of the layout of my rendering engine if you're interested
That would be helpful, but I don't think I'll understand most of it. I'm really very new to graphics, as is evident from my code.
>>102052967
as opposed to scene graph?
>>
Ooooh ggggg
https://jsfiddle.net/peteblank/84ucvjms/5/
>>
>>102052985
https://pastebin.com/TDexJdhU
These are the basic classes for my retained renderer
The renderer takes a list of renderable passes that get rendered in order
RenderableWorlds contain the scenes, which are lists of Lights and Renderables; RenderPasses target these RenderableWorlds, take a sample of the Lights and Renderables, and render them
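Roughly, in Odin-style pseudocode (a sketch of that structure only; these are not the pastebin's actual classes, and the field names are invented):

```odin
Renderable_World :: struct {
	lights:      [dynamic]Light,
	renderables: [dynamic]Renderable,
}

Render_Pass :: struct {
	world:  ^Renderable_World,           // the scene this pass draws from
	filter: proc(r: Renderable) -> bool, // which renderables this pass takes
	target: Framebuffer,                 // where this pass renders to
}

Renderer :: struct {
	passes: [dynamic]Render_Pass,        // rendered in order
}

render :: proc(r: ^Renderer) {
	for pass in r.passes {
		for obj in pass.world.renderables {
			if pass.filter(obj) {
				// draw obj into pass.target, lit by pass.world.lights
			}
		}
	}
}
```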
>>
>>102047923
He didn't know about DVI, but fine...
>>102047896
To put it simply, it's just "old HDMI"...
--
Maybe this is a mistake on my part, and maybe I'm wrong, but I wish indie people focused more on people with potato PCs.
I can't even properly run KDE due to its RAM usage; I use a hybrid of Xfce4, tint2 and dwm/dmenu, and play only DCSS...
>>
>>102053724
If you're so smart why don't you buy a better PC?
>>
>>102053926
Because I'm not smart...? The most complex projects I did were 3D Vulkan and OpenGL renderers, a ray tracer, an IA-32e branchless assembler and a semi-RPG game, image related.
I have a ton more projects, but I don't like my old coding style and have to convert them to my new and final style. I like simple stuff, mainly written in C, Fortran and Ada, plus some assembly.
--
It's just that I've learned some things over time, and I hate to see other people shilling X, Y or Z without acknowledging that it has pros and cons, and sometimes a lot more cons...
>>
>>102052862
you can write a class that manages its own VBO/VAO internally, then you just expose a drawTriangle(position, color, rotation, scale) function
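Something like this, sketched in Odin (purely illustrative, hypothetical code: the shader uniform, the CPU-side transform and all the names are my own assumptions, not a real API):

```odin
// The wrapper owns its GL objects; callers never see a vbo or vao.
// Assumes a GL context plus `import gl "vendor:OpenGL"` and `import "core:math"`.
Triangle_Renderer :: struct {
	vao, vbo, program: u32,
}

triangle_renderer_init :: proc(tr: ^Triangle_Renderer, program: u32) {
	tr.program = program
	gl.GenVertexArrays(1, &tr.vao)
	gl.BindVertexArray(tr.vao)
	gl.GenBuffers(1, &tr.vbo)
	gl.BindBuffer(gl.ARRAY_BUFFER, tr.vbo)
	// declare the vertex layout once, here, instead of in every draw call
	gl.VertexAttribPointer(0, 2, gl.FLOAT, gl.FALSE, 2 * size_of(f32), 0)
	gl.EnableVertexAttribArray(0)
}

draw_triangle :: proc(tr: ^Triangle_Renderer, position: [2]f32, color: [3]f32, rotation, scale: f32) {
	// build the transformed vertices on the CPU for simplicity
	verts: [6]f32
	for i in 0..<3 {
		angle := rotation + f32(i) * (2 * math.PI / 3)
		verts[i*2+0] = position.x + scale * math.cos(angle)
		verts[i*2+1] = position.y + scale * math.sin(angle)
	}
	gl.UseProgram(tr.program)
	// color goes in via a uniform (hypothetical "u_color" in the shader),
	// so the vertex data stays position-only
	gl.Uniform3f(gl.GetUniformLocation(tr.program, "u_color"), color.r, color.g, color.b)
	gl.BindVertexArray(tr.vao)
	gl.BindBuffer(gl.ARRAY_BUFFER, tr.vbo)
	gl.BufferData(gl.ARRAY_BUFFER, size_of(verts), &verts, gl.DYNAMIC_DRAW)
	gl.DrawArrays(gl.TRIANGLES, 0, 3)
}
```

The caller ends up with exactly the API asked for: init once, then draw_triangle(position, color, rotation, scale).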
>>
>>102053143
not him, but this is helpful
>>
>>102054736
I went with some kind of batcher (with help from GPT) and it ended up looking like this:
flush :: proc(batch: ^BatchRenderer) {
	if batch.vertex_count == 0 {
		return
	}

	gl.UseProgram(batch.shader_program)
	gl.BindVertexArray(batch.vao)
	gl.BindBuffer(gl.ARRAY_BUFFER, batch.vbo)

	// upload everything accumulated since the last flush in one go
	gl.BufferData(
		gl.ARRAY_BUFFER,
		batch.vertex_count * size_of(Vertex),
		&batch.vertices[0],
		gl.DYNAMIC_DRAW,
	)

	gl_primitive: u32
	vertices_per_primitive: int // currently unused

	// #partial means unhandled primitives leave gl_primitive at its zero value
	#partial switch batch.current_primitive {
	case .Triangles:
		gl_primitive = gl.TRIANGLES
		vertices_per_primitive = 3
	}

	gl.DrawArrays(gl_primitive, 0, i32(batch.vertex_count))

	batch.vertex_count = 0
}

// And batchrenderer is initialized like this

BatchRenderer :: struct {
	vbo, vao:          u32,
	shader_program:    u32,
	vertices:          [MAX_VERTICES]Vertex,
	vertex_count:      int,
	current_primitive: PrimitiveType,
}

init_batch :: proc() -> BatchRenderer {
	batch: BatchRenderer

	gl.GenVertexArrays(1, &batch.vao)
	gl.BindVertexArray(batch.vao)

	gl.GenBuffers(1, &batch.vbo)
	gl.BindBuffer(gl.ARRAY_BUFFER, batch.vbo)

	// the attribute layout is recorded in the VAO once, here, not per draw
	gl.VertexAttribPointer(0, 3, gl.FLOAT, gl.FALSE, size_of(Vertex), 0)
	gl.EnableVertexAttribArray(0)

	gl.VertexAttribPointer(1, 3, gl.FLOAT, gl.FALSE, size_of(Vertex), 3 * size_of(f32))
	gl.EnableVertexAttribArray(1)

	triangle_shader := gl.load_shaders_file(
		"tutorial/triangle.vs",
		"tutorial/triangle.fs",
	)

	batch.shader_program = triangle_shader
	batch.current_primitive = .Triangles

	gl.BindVertexArray(0)
	gl.BindBuffer(gl.ARRAY_BUFFER, 0)

	return batch
}



Also, I think I'm gonna stop using AI in general. I don't feel good about this code: I get what it's trying to do, but I didn't get there of my own accord, so it feels shallow in a way.
>>102054828
thanks for this, but like I said, I'm pretty lost.
>>
>>102056031
>uses AI and complains about being lost
You get what you give
>>
>>102056031
Using neural networks for programming is like using a random number generator for assembling.
You'll get results (code / machine code), but it'll be an incorrect mess in both cases: suboptimal, or simply stupid.
>>
>>102056148
It's only good as glorified documentation at this point
>>
File: tired.jpg (61 KB, 500x528)
61 KB
61 KB JPG
so this is what sunk cost feels like, huh... it fucking sucks. i don't feel like working on the engine but i have to finish it.
>>
any of you faggots tried path tracing for your audio?
is the cpu (avx-512, i ain't going back to unmasked simd ops) fast enough, or do you have to do it on the gpu?
if gpu, how tf do you deal with the latency?
currently i only have a barebones audio pipeline and i'm about to rewrite it from scratch, but i don't really want to put much effort into dead ends
>>
>>102052049
I've created two functioning game engines in the past 5 years while having a full time job.
>>
>>102057674
The fact that you needed to start over after the first attempt says it all
>>
>>102058526
Not that Anon, but the fact that you consider recreation and hobbies a waste of time says it all...
>>
>>102058526
I didn't start over. I actively maintain both of them.
>>
>>102058718
>>102058739
A game engine is a waste of time if no one is using it to make a game. I assume you or someone else made a game with your engine before you switched to making another one?
>>
>>102058783
The games already existed before I started working on them. There are about 20 supported games in total so far across both engines.
>>
>>102058854
Based, good work.
>>102058783
You're literally what's wrong with current /g/...
Recreation and learning are never a waste.
If you really mean what you post here, sad...
>>
>>102058854
>The games already existed before I started working on them.
Huh?


