He's got a point.
>>107083562
they encode a very wide range of values with good enough precision for many applications using relatively few bits
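in concrete terms (a minimal C sketch, nothing more): the same 32 bits that give an integer a range of about ±2 billion give a float roughly 38 orders of magnitude either way, at ~7 significant digits anywhere in that range
[code]
/* Illustrative only: what 32 bits buys you as a float vs. as an integer. */
#include <stdio.h>
#include <float.h>
#include <limits.h>

int main(void) {
    printf("int32 range  : %d .. %d\n", INT_MIN, INT_MAX);
    printf("float32 range: %e .. %e\n", FLT_MIN, FLT_MAX);   /* ~1e-38 .. ~3e+38 */
    printf("float32 relative precision: %e (~7 decimal digits)\n", FLT_EPSILON);
    return 0;
}
[/code]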
>>107083562
>tell me you (only know computer science and) don't know scientific computing without telling me you don't know - the meme
lol
>>107083562
Because without floats in 3D rendering you either have PS1-style wobbly polygons (fixed point arithmetic) or have to use some sort of lossless decimal structure (comparatively slow as fuck).
>>107083562
>binary data was not supposed to have decimal parts
someone who can't use "decimal" correctly should not be making a meme in this format
>>107083961
why 3D specifically? why would rendering in 2D or 4D or 512D be any different?
>>107083961
I'm pretty sure it's the affine transformations and lack of Z-buffer that cause wobbly polygons on the PS1.
As a counter example, Doom I & II use 16.16 fixed point and don't suffer from jitter.
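for anyone wondering what 16.16 fixed point actually looks like, here's a rough sketch of the idea (not the actual Doom source, which lives in m_fixed.c):
[code]
/* 16.16 fixed point: top 16 bits are the integer part, bottom 16 the fraction.
 * Sketch of the idea only, not the actual Doom engine code. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t fixed_t;
#define FRACBITS 16
#define FRACUNIT (1 << FRACBITS)            /* 1.0 in fixed point */

static fixed_t fixed_mul(fixed_t a, fixed_t b) {
    /* widen to 64 bits so the intermediate product can't overflow */
    return (fixed_t)(((int64_t)a * b) >> FRACBITS);
}

int main(void) {
    fixed_t x = 3 * FRACUNIT + FRACUNIT / 2;            /* 3.5 */
    fixed_t y = 2 * FRACUNIT;                           /* 2.0 */
    printf("%f\n", fixed_mul(x, y) / (double)FRACUNIT); /* 7.000000 */
    return 0;
}
[/code]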
>>107084028
Doom does suffer from jitter due to fixed point arithmetic, although it's usually controlled pretty well.
>>107083961
Fixed point *is* the solution to the pixel-snapping problem. All modern triangle rasterizers are fixed-point.
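i.e. vertex positions get snapped to a subpixel grid before the edge equations run; something along these lines (illustrative only, the subpixel resolution varies by GPU, 1/16 or 1/256 of a pixel is typical):
[code]
/* Snap a screen-space coordinate to a 1/16-pixel grid, the way a fixed-point
 * rasterizer would before evaluating its edge equations. Illustrative numbers. */
#include <stdio.h>
#include <math.h>

#define SUBPIXEL_BITS 4   /* 16 subpixel positions per pixel */

static int snap(float coord) {
    return (int)lroundf(coord * (1 << SUBPIXEL_BITS));   /* 28.4-style fixed point */
}

int main(void) {
    float x = 123.4567f;
    int fx = snap(x);
    printf("%.4f -> %d/16 = %.4f\n", x, fx, fx / 16.0);  /* 123.4567 -> 1975/16 = 123.4375 */
    return 0;
}
[/code]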
>>107083562
I don't know, that seems like something a compiler needs to worry about, not me.
>>107084064
>>107083961
which part of the pinhole projection model is it that suffers from fixed point arithmetic, exactly? can you explain?
I work with 3d-to-2d projection all the time and I can't for the life of me see why it would matter.
>>107083562
Except computers were invented for floating point calculations.
They have scientific and military uses.
Computers were not invented for spying on the population, which is what IBM used them for with their chars and integers.
>>107084145
https://doomwiki.org/wiki/Wall_wiggle_bug
>>107083562
i don't even understand floating points that well so I certainly am not going to play contrarian
>>107084185
Doesn't sound like an issue inherent to fixed-point.
It's caused by polar coordinates creating a singularity (e.g. divide-by-zero), made worse by using a look-up table to calculate the angle which discards the lower 19 bits of the input angle.
The fix linked in the first reference does the exact same math, but the look-up table is replaced with actually calculating tan(), thus preserving all 32 bits of precision.
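you can see the same effect with plain tan(): quantize the angle the way a look-up table does and the error is negligible over most of the range but blows up next to the singularity. illustrative numbers below, not Doom's actual table resolution:
[code]
/* Quantizing an angle before taking tan() is harmless over most of the range,
 * but near the 90-degree singularity the tiny angle error becomes a huge value
 * error. Illustrative only, not the actual Doom table code. */
#include <stdio.h>
#include <math.h>

static double tan_quantized(double angle, double step) {
    return tan(floor(angle / step) * step);   /* discard the low-order angle bits */
}

int main(void) {
    double step  = 1e-4;                      /* pretend table resolution */
    double mild  = 0.5;
    double nasty = M_PI / 2 - 1e-5;           /* just shy of the singularity */
    printf("away from 90 deg: exact=%f quantized=%f\n", tan(mild),  tan_quantized(mild, step));
    printf("near 90 deg:      exact=%f quantized=%f\n", tan(nasty), tan_quantized(nasty, step));
    return 0;
}
[/code]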
>>107083562
>He's got a point.
A floating point, that is!
>>107084185
this doesn't look like something that would happen with modern triangle/quad rendering and tessellation
Ever heard of normalized coordinates?
>>107083562
the real question is: is there any point to the new meme learning low precision floating point formats like bfloat16, float8, and bfloat8 outside of AI?
they're supposed to be more accurate or something compared to halves?
>>107086661
>are low precision types supposed to be more accurate
based retard. No, it's just memory usage. AI tards cannot into efficient code, so they have to bloat hardware memory instead.
>>107086705
>reading comprehension
more accurate as compared to the (at least on gpus) very widely supported low precision type which i mentioned, 16 bit IEEE floats i.e. halves
which happen to be the same size as bfloat16
>>107083562
>loses accuracy at higher numbers
what's the fucking point here exactly? why would you want to break incrementing big numbers?
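the point is you can't avoid it: a 32-bit float only has 24 bits of mantissa, so past 2^24 it can't even represent every integer and incrementing silently stops working. quick demo:
[code]
/* Above 2^24 a 32-bit float can no longer represent every integer,
 * so "n + 1" silently does nothing. */
#include <stdio.h>

int main(void) {
    float n = 16777216.0f;          /* 2^24 */
    printf("%.1f\n", n);            /* 16777216.0 */
    printf("%.1f\n", n + 1.0f);     /* still 16777216.0: the increment is lost */
    return 0;
}
[/code]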
>>107083961
usecase for 3d?
>>107083562
OP is retarded.
Just a reminder that Chuck Moore made processors (GreenArrays) with his own CAD software (OKAD) written entirely in his own Forth, and he used no floating point, only fixed point arithmetic.
>>107083635
I good enoughed your mom
I'm sure you won't mind, enough.
>ctrl + f
>posits
>0 results
>>107083562
fixed point arithmetic doesn't solve the 1/10+2/10 problem btw
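right, because 1/10 has no finite binary expansion, so binary fixed point can't store it exactly either. e.g. in 16.16:
[code]
/* 1/10 has no finite binary expansion, so 16.16 fixed point can't represent it
 * exactly: adding "0.1" ten times does not give exactly 1.0. */
#include <stdint.h>
#include <stdio.h>

int main(void) {
    int32_t one   = 1 << 16;                   /* 1.0 in 16.16 = 65536 */
    int32_t tenth = (one + 5) / 10;            /* "0.1" = 6554, actually 0.100006... */
    int32_t sum   = 0;
    for (int i = 0; i < 10; i++) sum += tenth;
    printf("sum = %d, one = %d\n", sum, one);  /* 65540 vs 65536 */
    return 0;
}
[/code]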
>>107084166
>Computers were not invented for spying on the population
heh
>>107086661
bfloat16 is literally just the upper half of a single precision float, so the only advantage is half the storage space at a cost of ~70% of the precision.
It's still accurate to ~2 decimal digits (enough for percentages), and it's trivial to convert back to a single precision float, so it covers a decent number of use cases.
fp8 and fp4 are pretty much just for ML, yeah. They have shit accuracy and range, which only really works well when most of the numbers are normalized and you have literally billions of them to diffuse the error out over.
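the "upper half" part is literal. a minimal sketch of the chop conversion; real converters typically round to nearest even instead of just truncating:
[code]
/* bfloat16 as the top 16 bits of an IEEE-754 single: same sign and exponent,
 * only 7 stored mantissa bits left. Simple truncation sketch; real libraries
 * usually round to nearest even rather than chopping. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static uint16_t float_to_bf16(float f) {
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);     /* type-pun without UB */
    return (uint16_t)(bits >> 16);      /* keep the high half */
}

static float bf16_to_float(uint16_t h) {
    uint32_t bits = (uint32_t)h << 16;  /* discarded mantissa bits become zero */
    float f;
    memcpy(&f, &bits, sizeof f);
    return f;
}

int main(void) {
    float x = 3.14159265f;
    printf("%.7f -> %.7f\n", x, bf16_to_float(float_to_bf16(x)));  /* ~3.1406250 */
    return 0;
}
[/code]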
>>107087645
If you want that, you want either decimal or rational math. They have their downsides.