/g/ - Technology


Thread archived.




File: floats.jpg (179 KB, 1080x1076)
He's got a point.
>>
>>107083562
they encode a very wide range of values with good enough precision for many applications using relatively few bits
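That tradeoff is easy to see by round-tripping values through 32-bit storage — a Python sketch using the stdlib struct module:

```python
import struct

# A 32-bit float spans roughly 1e-38 to 3e38 with ~7 significant
# decimal digits -- a huge dynamic range in just 4 bytes.
def to_f32(x):
    """Round-trip a Python float through 32-bit float storage."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_f32(3.0e38))        # near the top of the range, still finite
print(to_f32(1.0e-38))       # tiny values survive too
print(to_f32(0.123456789))   # but only ~7 significant digits survive
```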
>>
>>107083562
>tell me you (only know computer science and) don't know scientific computing without telling me you don't know - the meme
lol
>>
>>107083562
Because without floats in 3D rendering you either have PS1-style wobbly polygons (fixed point arithmetic) or have to use some sort of lossless decimal structure (comparatively slow as fuck).
>>
>>107083562
>binary data was not supposed to have decimal parts
someone who can't use "decimal" correctly should not be making a meme in this format
>>
File: brainlets.png (123 KB, 733x380)
>>107083961
why 3D specifically? why would rendering in 2D or 4D or 512D be any different?
>>
>>107083961
I'm pretty sure it's the affine transformations and lack of Z-buffer that cause wobbly polygons on the PS1.
As a counterexample, Doom I & II use 16.16 fixed point and don't suffer from jitter.
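The 16.16 format mentioned here can be sketched in a few lines. The names FRACBITS/FRACUNIT follow the Doom source's convention, but this is an illustration, not the engine's actual code:

```python
# 16.16 fixed point: a 32-bit integer whose low 16 bits are the
# fraction, so FRACUNIT (65536) represents 1.0.
FRACBITS = 16
FRACUNIT = 1 << FRACBITS

def to_fixed(x):
    return round(x * FRACUNIT)

def fixed_mul(a, b):
    # the product has 32 fraction bits; shift the extra 16 back out
    return (a * b) >> FRACBITS

def from_fixed(a):
    return a / FRACUNIT

print(from_fixed(fixed_mul(to_fixed(1.5), to_fixed(2.25))))  # 3.375, exact
```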
>>
>>107084028
Doom does suffer from jitter due to fixed point arithmetic, although it's usually controlled pretty well.
>>
>>107083961
Fixed point *is* the solution to the pixel-snapping problem. All modern triangle rasterizers are fixed-point.
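A minimal sketch of why that solves pixel snapping: with vertices snapped to a subpixel grid, the edge function becomes exact integer arithmetic, so two triangles sharing an edge can never disagree about a pixel. SUBPIXEL_BITS and the coordinates below are made up for illustration:

```python
# Snap coordinates to a 1/16-pixel grid (28.4 fixed point, a common
# choice in hardware rasterizers).
SUBPIXEL_BITS = 4

def snap(coord):
    return round(coord * (1 << SUBPIXEL_BITS))

# Edge function: sign tells which side of edge a->b the point p is on.
# In snapped integer coordinates this is exact -- no rounding drift.
def edge(ax, ay, bx, by, px, py):
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

a = (snap(0.3), snap(0.7))
b = (snap(10.2), snap(5.5))
p = (snap(4.0), snap(3.0))
print(edge(a[0], a[1], b[0], b[1], p[0], p[1]))  # exact integer result
```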
>>
>>107083562
I don't know, that seems like something a compiler needs to worry about, not me.
>>
>>107084064
>>107083961
which part of the pinhole projection model is it that suffers from fixed point arithmetic, exactly? can you explain?

I work with 3d-to-2d projection all the time and I can't for the life of me see why it would matter.
>>
>>107083562
Except computers were invented for floating point calculations.
They have scientific and military uses.

Computers were not invented for spying on the population, which is what IBM used them for with their chars and integers.
>>
>>107084145
https://doomwiki.org/wiki/Wall_wiggle_bug
>>
>>107083562
i don't even understand floating points that well so I certainly am not going to play contrarian
>>
>>107084185
Doesn't sound like an issue inherent to fixed-point.
It's caused by polar coordinates creating a singularity (e.g. divide-by-zero), made worse by using a look-up table to calculate the angle, which discards the lower 19 bits of the input angle.
The fix linked in the first reference does the exact same math, but the look-up table is replaced with actually calculating tan(), thus preserving all 32 bits of precision.
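The error pattern being described can be sketched generically — this is not Doom's actual table code, and TABLE_BITS and the test angle are made up, but it shows how quantizing the input to a LUT index blows up near the tan() singularity:

```python
import math

# Quantize a 32-bit binary angle down to a LUT index by discarding
# its low bits, then compare tan() of the quantized vs. true angle.
TABLE_BITS = 13  # hypothetical 8192-entry table

def quantize(angle, total_bits=32):
    step = 1 << (total_bits - TABLE_BITS)  # low 19 bits discarded
    return (angle // step) * step

full = 1 << 32                    # full circle in binary angle units
angle = int(0.2499 * full)        # just shy of 90 degrees
true_t = math.tan(2 * math.pi * angle / full)
lut_t = math.tan(2 * math.pi * quantize(angle) / full)
print(true_t - lut_t)             # near the singularity the error is huge
```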
>>
File: pablo honey.jpg (7 KB, 244x207)
>>107083562
>He's got a point.
A floating point, that is!
>>
>>107084185
this doesn't look like something that would happen with modern triangle/quad rendering and tessellation
>>
Ever heard of normalized coordinates?
>>
>>107083562
the real question is: is there any point to learning the new meme low precision floating point formats like bfloat16, float8, and bfloat8 outside of AI?
they're supposed to be more accurate or something compared to halves?
>>
>>107086661
>are low precision types supposed to be more accurate
based retard. No, it's just memory usage. AI tards cannot into efficient code, so they have to bloat hardware memory instead.
>>
>>107086705
>reading comprehension
more accurate as compared to the (at least on gpus) very widely supported low precision type, which i mentioned, 16 bit IEEE floats i.e. halves
which happen to be the same size as bfloat16
>>
>>107083562
>loses accuracy at higher numbers
what's the fucking point here exactly? why would you want to break incrementing big numbers?
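The loss of integer accuracy is easy to demonstrate: once the significand runs out of bits, consecutive integers become indistinguishable. For 64-bit doubles that happens at 2**53:

```python
# A 64-bit float has a 53-bit significand: above 2**53 it can no
# longer represent every integer, so incrementing is silently lost.
big = 2.0 ** 53
print(big + 1 == big)          # True: the +1 is rounded away
print(2 ** 53 + 1 == 2 ** 53)  # False: Python ints are exact
```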
>>
>>107083961
usecase for 3d?
>>
>>107083562
OP is retarded.
>>
Just a reminder that Chuck Moore made processors (greenarrays) with his own CAD software (okad), written entirely in his own Forth, and he used no floating point, only fixed point arithmetic.
>>
>>107083635
I good enoughed your mom

I'm sure you won't mind, enough.
>>
>ctrl + f
>posits
>0 results
>>
>>107083562
fixed point arithmetic doesn't solve the 1/10+2/10 problem btw
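Easy to check: 1/10 has no finite base-2 expansion, so binary fixed point only approximates it — same root cause as with floats. A sketch assuming 16.16 fixed point:

```python
from fractions import Fraction

# 0.1 cannot be represented exactly in 16.16 fixed point: the ideal
# value would be 6553.6 units, which isn't an integer.
FRACUNIT = 1 << 16
tenth = round(0.1 * FRACUNIT)              # 6554
print(tenth / FRACUNIT)                    # 0.100006..., not 0.1
print(3 * tenth == round(0.3 * FRACUNIT))  # False: the error accumulates

# Rational (or decimal) types do solve it, at a performance cost:
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```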
>>
File: 1721072985261900.jpg (2.48 MB, 4096x5120)
>>107084166
>Computers were not invented for spying on the population
heh
>>
>>107086661
bfloat16 is literally just the upper half of a single precision float, so the only advantage is half the storage space, at the cost of ~70% of the mantissa bits.
It's still accurate to ~2 decimal digits (enough for percentages), and it's trivial to convert back to a single precision float, so it covers a decent number of use cases.

fp8 and fp4 are pretty much just for ML, yeah. They have shit accuracy and range, which only really works well when most of the numbers are normalized and you have literally billions of them to diffuse the error out over.
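The upper-half relationship is easy to show with stdlib struct. A sketch using truncation for simplicity (real conversions usually round to nearest):

```python
import struct

# bfloat16: keep only the high 16 bits of a float32, i.e. the sign,
# the full 8-bit exponent, and the top 7 mantissa bits.
def to_bf16(x):
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return struct.unpack('<f', struct.pack('<I', bits & 0xFFFF0000))[0]

# IEEE half (fp16) for comparison, via struct's 'e' format.
def to_fp16(x):
    return struct.unpack('<e', struct.pack('<e', x))[0]

print(to_fp16(1.001))  # keeps ~3 decimal digits (10 mantissa bits)
print(to_bf16(1.001))  # 1.0 -- only ~2 digits (7 mantissa bits)
print(to_bf16(1e38))   # fits: bfloat16 keeps float32's exponent range
# struct.pack('<e', 1e38) would raise OverflowError: out of fp16 range
```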
>>
>>107087645
If you want that, you want either decimal or rational math. They have their downsides.


