/g/ - Technology

File: alloc.png (167 KB, 923x689)
Been reading up on manual memory management (MMM) lately, and wound up reading Ginger Bill's series on allocators.

So far, in my career, I've seen three patterns for memory management past the C minimum:
1. Leave it to GC
2. Borrow checking (Rust)
3. RAII with tools like smart pointers (C++)

All of these assume that you're going to be slinging arbitrary-sized data onto the heap at more or less random points in program execution and need something to manage those allocations. Ginger Bill alleges this is not true, and that most of the time you really know how much memory you're dealing with. That opens up a new option:
4. Allocator-based management

From what I understand, the principle is to preallocate necessary memory to complete a task, complete it without worrying too much about memory efficiency within the task, and then release all the memory at once.
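For concreteness, here's roughly what I picture (a minimal bump/arena allocator sketch; the names and the alignment choice are mine, not taken from the articles):

#include <stddef.h>
#include <stdlib.h>

/* One fixed block handed out linearly; nothing is freed individually,
   the whole thing is reset (or released) at once. */
typedef struct {
    unsigned char *base;
    size_t capacity;
    size_t offset;
} Arena;

int arena_init(Arena *a, size_t capacity)
{
    a->base = malloc(capacity);
    a->capacity = capacity;
    a->offset = 0;
    return a->base != NULL;
}

void *arena_alloc(Arena *a, size_t size)
{
    size_t aligned = (size + 15) & ~(size_t)15;     /* keep returned pointers aligned */
    if (aligned < size || aligned > a->capacity - a->offset)
        return NULL;                                /* out of space: caller decides what to do */
    void *p = a->base + a->offset;
    a->offset += aligned;
    return p;
}

void arena_reset(Arena *a)   { a->offset = 0; }                /* "free everything at once" */
void arena_destroy(Arena *a) { free(a->base); a->base = NULL; }

Within a task everything comes from arena_alloc, and finishing the task is a single arena_reset (or arena_destroy) instead of a trail of individual frees.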

I expect this would be very efficient for low-level computing like graphics, which Ginger Bill seems to think about a lot. But I'm not convinced that his allocation breakdown makes sense, unless you're counting trivial stack allocations that nobody has trouble managing in the first place. Certainly it doesn't seem to hold in the OOP world of enterprise software, which is what pays the bills for me. Daemons and web servers have constantly shifting allocation sizes according to business need, and most of the garbage I deal with is in his top right corner.

Anyone try programming this way? How did it work in practice?
>>
>>107404326
It's very difficult, but you can do more things.
The guides you are reading suck.

POINTERS - 10 Hours
C is a typeless language, it does not know what data is what type. That is why printf allows you to *interpret* data how you see fit

1. Write out each program that is inside the following youtube video
https://www.youtube.com/watch?v=MIL2BK02X8A

2. Try to write 1 additional test per program in the video.
Did you learn anything? Did you test it properly?
AND TEST IT.

How many additional tests did you write?

3. Update your notes of understanding then present those notes to me


This is a course you need to spend 10 hours on to get a medium understanding of pointers, it is very hard learning.
>>
People who say pointers are easy don't know what they're talking about.
>>
>>107404326
isn't that basically making simple programs and then just not calling free when they exit?
>>
>>107404326
No. The analysis is fine on a purely statistical foundation, but nobody cares about the top half of that table.
lifetime/size known is just... a variable. The language handles that for you when you write int i = 0. Nobody needs to think about this. It's the "unknowns" that take most of your memory, and are most prone to causing issues. Of those unknowns, "size known" is often misleading. You very rarely know exactly how much space you will need; if you know the size of one "element", there usually is an unknown number of elements.
"Lifetime known" is also misleading. If the lifetime is short, such as one scope, then it's easy peasy to handle and you don't even think about it. If the upper bound lifetime is measured in real time, or as one "scene", or as part of a complex function, then reserving all that memory until you're ABSOLUTELY sure it can't possibly be needed anymore may be too costly. Imagine a program that recursively reads files keeping each file in memory until the entire process finishes, just to avoid having to manage lifetimes.

Overall it sounds like a mathematical approach to memory management that has no bearing on real world applications. Maybe it works for graphics, I don't have much experience in graphics programming, but it's surely not an optimal strategy for most programming
>>
You guys are absolutely wasting your life on this. This is the most premature optimization imaginable. I work as a contractor and I'm brought in to fix shit other people wrote. One round trip to the database dwarfs all your memory allocation concerns for the entire function/method/whatever. I just spent a day dropping a page load from 2000 round trips to the database down to 100. The page is now 10x faster. The client is ecstatic.
At no point in this process did memory allocation make up more than a miniscule fraction of the page load. Not before. Not after. No one cares. Going to the heap is like going to the fridge. Going to the database is going to Jupiter by comparison. You can go to the heap as much as you fucking want and it will never make any goddamn difference at all.
>>
>>107404593
Not everyone is a webshitter.
>>
>>107404617
Did you miss where OP said:
>Certainly it doesn't seem to hold in the OOP world of enterprise software, which is what pays the bills for me
>>
>>107404449
That's roughly what I suspected. When my RPC server gets a request, it can have pretty much any size of data in it, and I need to allocate that dynamically at runtime (but can scope it to die when the request ends, RAII style). When my daemon enqueues a work item, the lifecycle of that item escapes any one scope and needs to be managed by whichever worker scrapes it off the queue. If I make a logic mistake, then I can easily hit a use-after-free or double free or memory leak. I don't think a custom allocator does anything to really help in those situations. I just have to walk through the code, make sure that every allocation path ends in a free one way or another, and use asan or what have you to find the cases where I make mistakes.

I was also reading https://verdagon.dev/blog/when-to-use-memory-safe-part-1 , which asserts that you can achieve memory safety by allocating frequently-used objects into a large static array and then passing references to the array:

> If we use a Ship after we've released it, we'll just dereference a different Ship, which isn't a memory safety problem.

That feels like it's missing the point somewhat. Certainly it doesn't result in an RCE vulnerability, but the result of this is a difficult-to-diagnose bug in the program. The author trivializes the problem by making his example a video game, but arbitrary code execution isn't actually a problem for most video games - Super Mario World has a well-known ACE bug, and the fallout from that is that there's a goofy speedrun category for it. The real problem with a memory bug, even if the fallout from it is bounded, is that now your program has a particularly pernicious bug in it.

This is honestly making me much more sympathetic to the C++ tools-based solution with smart pointers and RAII, even though I've found the language to be a bit of a dumpster fire in a corporate environment.
>>
>>107404593
Are you making the argument for GC here, or just RAII?

Anyway, the point of the thread is less "how can I juice my programs" and more "this experienced developer is saying something that doesn't align with my own experience, what have you guys seen?"
>>
Yes, I do think about code I write and how it works.
>>
>>107404326
instead of splitting hairs about "theory", one can just ask for benchmarks that beat rust used with unhardened (or equivalently hardened) mimalloc/snmalloc/tcmalloc/...
details about when and how "drops" happen aside, borrow-checking and RAII are orthogonal btw (rust has both).
>>
>>107404664
Keeping a pool of frequently allocated objects is a great thing to do, but not really for reasons of memory safety. It deduplicates memory, skips unneeded disk reads and lets you centrally manage resources. Especially in a videogame engine it's a good standard, but I completely fail to see how it improves memory safety. You still need to refcount it or otherwise manage its lifetime to avoid leaking, and this brings back all the flaws of smart pointers. It's just more centralized, which makes it easier to debug in some ways and harder in others
>>
>>107404687
I'm making the statement that if you're working in enterprise software, you shouldn't worry about memory allocation at all.
Going to memory is 1000x faster than going to disk and 1000000x faster than going to the DB. The only memory value I care about in enterprise development is how much memory is left.
>>
>>107404326
"Computer Science" exists entirely in that last domain, Size Unknown and Lifetime Unknown.
A Turing Machine has no known lifetime (Halting Problem) and requires Infinite Memory.
All those other cases are Finite State Machines.
If you are going to restrict yourself in such a way, the language should NOT be Turing Complete.
Thus you should be writing everything in HTML.
>>
>>107404724
The guy I linked makes the argument that "memory safety" just means never using a pointer to memory that is not in the expected data format for that pointer type. I see what he's trying to get at, but the implicit argument that the only memory bugs we care about are the most extreme memory safety bugs is like the Rust assertion that memory leaks aren't in the larger family of memory bugs (it's hard to stop them, I get it, but you can't just handwave programming challenges away).

>>107404733
>you shouldn't worry about memory allocation at all.
So GC or RAII? Otherwise you do have to allocate and deallocate it, it's just the mechanics of the language.
>>
>>107404326
words words blogspam theory circlejerk

The big chunk allocation is arena allocation. You are right that it is commonly used in graphics programming.

If you have dynamic sizes, you can use a size-class allocator as a compromise. It's as simple as LD_PRELOAD=/usr/lib/libtcmalloc.so.4 or something. Literally a single line of code improved performance at FB
>>
>>107404790
>So GC or RAII? Otherwise you do have to allocate and deallocate it, it's just the mechanics of the language.
If you have to think about allocation or deallocation at all, it's a bad language for enterprise. It's a language that is wasting your time. Your program is going to use a tiny fraction of the memory that your database uses. The memory used is insignificant. The amount of time talking to memory vs the database is insignificant. The most important memory battle to fight in enterprise is making sure the pinheads in IT who bitch about your database server pony up. Those idiots will try to cut your database hardware down to the point it reads off the fucking disk instead of from memory.
>>
>>107404854
Yeah, that pretty much tracks with my experience. The only performance costs I've ever had to reckon with were database costs.

What I've found to be the bigger issue, practically speaking, is correct code architecture and design. I see garbage abstractions every day, and each one of them is a nightmare to extend or build on top of or even to deprecate. So if I were choosing language, it would mostly be on the basis of which one makes it hardest to seriously fuck your abstractions up, which to be perfectly frank doesn't favor any language particularly. You can always build a better idiot. I've seen trash written in C++, C#, Go, Python, Typescript, Javascript, pretty much everything. You just can't stop 'em.

Still, it's nice to explore outside of my existing expertise.
>>
>>107404326
With the "linear allocator that releases all memory at once" I always run into issues:

- Overlapping lifetimes
Imagine you want to read a png file, decode the bytes, then free the original raw png bytes, but keep the decoded bytes. With a single linear allocator you can't free just the raw png bytes. You have two options here.
Either just don't free, but that's like saying I don't do memory management at all here. Could be fine if you have enough memory, but that just sidesteps the problem.
Or you have to pass in two allocators, e.g. one for temporary allocations and one for permanent ones you want to keep (rough sketch at the end of this post). That's only a little annoying in this scenario, and it's pretty clear what's temporary and what's permanent, but I always run into situations where I suddenly have to route a second arena down a bunch of calls or create a new one ad-hoc just to get some memory, and it gets messy quickly.

- Growing data structures
Dynamic arrays are just so useful, but if you only have a linear allocator you can only allocate at the top, so you can only grow something in place if it happens to be the very last thing you allocated, which is never the case in non-trivial situations. In the general case, you're just leaving the old memory behind. With a growth factor of 2, you'd end up needing twice as much memory for your array because of that fragmentation.
That's fine again if you have enough memory, but my point is that arenas just suck for this kind of allocation pattern. Regular allocators handle it fine.
And if you're starting to think about adding a list of free blocks to your linear arena that can hold the blocks that some other data structure grew out of, then you're essentially writing a bad general purpose allocator ad-hoc in your arena. Might as well use a generic malloc then.

Sometimes there are clear scenarios where you're just allocating a bunch of stuff linearly, and then linear allocators are obviously great. But trying to force them into all other cases seems like a bad idea.
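The promised sketch of the two-allocator version, since the png example keeps coming up. The Arena type is the bump allocator sketched earlier in the thread, and read_entire_file / decoded_size / decode_png are stand-ins I made up (prototypes only, they'd be implemented elsewhere), not real library calls:

#include <stddef.h>

/* hypothetical helpers, assumed to exist elsewhere */
unsigned char *read_entire_file(const char *path, size_t *len, Arena *a);
size_t decoded_size(const unsigned char *raw, size_t raw_len);
void decode_png(const unsigned char *raw, size_t raw_len,
                unsigned char *out_pixels, int *w, int *h);

typedef struct { unsigned char *pixels; int w, h; } Image;

/* Raw file bytes go into the temporary arena, decoded pixels into the
   permanent one. Resetting `temp` after the loading phase frees every
   intermediate buffer in one go; `pixels` survives as long as `perm`. */
Image load_png(const char *path, Arena *temp, Arena *perm)
{
    size_t raw_len;
    unsigned char *raw = read_entire_file(path, &raw_len, temp);

    Image img;
    img.pixels = arena_alloc(perm, decoded_size(raw, raw_len));
    decode_png(raw, raw_len, img.pixels, &img.w, &img.h);
    return img;
}

The annoyance is exactly what I described: every function in the call chain now has to be handed both arenas, or at least the right one.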
>>
>>107404383
Or the average programmer isn't as bright as we'd like to think.
>>
>>107404326
You are basically talking about arenas, and they still don't protect you from out-of-bounds access.
As soon as you deal with pointers you take the risk of accessing invalid memory regions.

You could avoid it by always having a check that the address is valid, but this is inefficient.

Anyway, the issue has never been allocation itself but memory access patterns.
>>
>>107404383
They are easy if you are not a jeet.
It's literally just a memory region containing the address of another memory region, what's so hard to grasp about it...
>>
>>107404326
I use specialized allocators a lot, especially "alternate" stacks, helps bypass shit like having no dynamically sized arrays in the language while still having the performance of the stack.
>>
>>107404326
Borrow checker is not a form of memory management. Rust uses the same memory management model as modern C++ (RAII).
Borrow checker is a static analyzer. You can attach similar tools to your C++ codebases too.
>>
>>107406217
>You can attach similar tools to your C++ codebases too
those exist? link?
>>
>>107404326
Just use memory arenas
>>
>>107404326
Allocator-based memory management starts making sense when you realize that most of the time you have a plethora of different objects but they are part of the same system or program logic phase, so their lifetimes are bounded by a shared context. If you think about managing the memory in terms of individual objects, you have to either restrict or keep track of what's referencing what. On the other hand, if you know they're all going to be dead by a certain point, no matter how they reference each other, you can allocate and deallocate them collectively and never think about it.
>>
>>107404903
>Imagine you want to read a png file, decode the bytes, then free the original raw png bytes, but keep the decoded bytes.
Great. You now know how to handle any allocations you make internally in the process, without having to worry about individual deallocations. Deallocating the final result is obviously someone else's problem.

>Growing data structures
Use pools.
>>
>>107404401
No.
>>
>>107404449
>it doesn't work well for my unspecified and contrived scenario

>>107404664
>it doesn't work well for my use case

Ignoring the distinct possibility that you have a shit design, what's the point you're trying to make? That the strategy is not applicable 100% of the time? Ok. But it is applicable in a lot of situations where people opt for smart pointers or GC, or even more asinine slop like borrow-checking, which is the worst possible solution as it forces you to program as if every allocation is the worst-case scenario.
>>
>>107404593
True, you're bottlenecked by that meme piece of shit AS400 behind 7 proxies which stores the account balance.
>>
>>107406570
>it is applicable in a lot of situations where people opt for smart pointers or GC, or even more asinine slop like borrow-checking, which is the worst possible solution as it forces you to program as if every allocation is the worst-case scenario.
Ironically it also forces people to work around the borrow checker by doing things like using pools and handles for complex data structures and mutual references, embracing the same basic principle as using allocators.
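For anyone who hasn't seen the pool-plus-handles pattern, a rough sketch in C (the Ship type, MAX_SHIPS, and the generation scheme are made up for illustration; this is one common variant, not necessarily what any particular engine or the linked article does):

#include <stddef.h>
#include <stdint.h>

#define MAX_SHIPS 1024                         /* made-up capacity */

typedef struct { float x, y; } Ship;           /* made-up payload */

typedef struct { uint32_t index; uint32_t generation; } Handle;

typedef struct {
    Ship     ships[MAX_SHIPS];
    uint32_t generations[MAX_SHIPS];           /* bumped whenever a slot is released */
} ShipPool;

/* A stale handle (slot was released and possibly reused) is detected
   instead of silently dereferencing someone else's Ship. */
Ship *ship_get(ShipPool *p, Handle h)
{
    if (h.index >= MAX_SHIPS || p->generations[h.index] != h.generation)
        return NULL;
    return &p->ships[h.index];
}

void ship_release(ShipPool *p, Handle h)
{
    if (ship_get(p, h))
        p->generations[h.index]++;             /* invalidates all outstanding handles to this slot */
}

Acquiring just hands out {index, generations[index]} for a free slot. Which is the point: it's still lifetime management, just by index and generation instead of by raw pointer.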
>>
>>107404593
Everyone tells me optimizations are not worth it and yet when I implement them there are perceptible gains before I even run benchmarks.
Talk about wasting life, I execute programs way more than I write them. Look at suboptimal software and consider how much time you've spent waiting for just process init alone. Time spent on nothing. Over and over again, every day.
>>
>>107406230
Something like Coverity or IKOS might be useful. Static analysis for C++ exists but they are not going to be as reliable and/or expressible as rust borrow checker.
>>
>>107406751
>expressible as rust borrow checker
The Rust borrow checker doesn't allow you to express most correct programs, which is why static analysis tools for serious programming languages like C++ can't be as "reliable" as a "programming" language that simply refuses to compile perfectly good programs.
>>
>>107406791
>The Rust borrow checker doesn't allow you to express most correct programs
Neither do any other static analyzers. So what?
>>
>>107404383
> no pointer
> (you): can i have another beer?
> me: sure bruh, i have it right here. *hands you a beer*

> pointer
> (you): can i have another beer?
> me: sure bruh, just grab one from the fridge. *points at fridge*
>>
>>107406905
>So what?
So, retard, you can't expect a static analyzer for a real programming language to be as reliable as the borrow checker for your toy one.
>>
>>107404593
Shove your premature optimization up your ass, half the industry is about remaking the same shit (not even reinventing because changing it isn't guaranteed to be profitable) and then "maintaining it" (failing to clean up your mess) as if hardware changed meaningfully for the past decade now.
>>
>>107404326
>Ginger Bill alleges this is not true, and that most of the time you really know how much memory you're dealing with. That opens up a new option:
This is wrong. After over a decade of programming, nearly all my programs use a dynamic amount of memory that depends on input data, configuration, number of users/connections, etc.
>That opens up a new option:
>4. Allocator-based management
All three forms of memory management you mentioned, including C, use allocators. What do you think malloc does?

>From what I understand, the principle is to preallocate necessary memory to complete a task, complete it without worrying too much about memory efficiency within the task, and then release all the memory at once.
>Anyone try programming this way? How did it work in practice?
This has very narrow use cases. So far I have only found memory arenas useful in two kinds of programs: raytracers, which allocate a lot of temporary data that can all be released after a frame is rendered, and some games that have a hard limit on the number of enemies or projectiles (object pools). In both cases, these arenas are ofc allocated by dynamic memory allocation mechanisms.

>I expect this would be very efficient for low-level computing
In low level computing, I just allocate a heap and a few statically known persistent allocations on my stack, with maybe some additional heap in external memory (e.g. PSRAM) with an independent (dynamic) allocator, and that's it. I just write code that rarely allocates things, and allocations are not that slow anyway.
>>
>>107406943
>So, retard, you can't expect a static analyzer for a real programming language to be as reliable as the borrow checker
That's literally what I said. Thanks for paraphrasing my post I guess.
>>
>>107407003
>That's literally what I said
No, what you said is that the borrow checker is more "reliable/expressive" which is not the case. The reliability here comes at the cost of not being able to express most normal programs.
>>
>>107407015
>The reliability here comes at the cost of not being able to express most normal programs.
As you have noticed, static analyzers for other languages are even less expressible than the borrow checker. So yeah, the borrow checker is more reliable and expressible than C++ with an external static analyzer.
This mostly comes down to the fact that Rust, from syntax to its stdlib, has been designed with this sort of static analysis in mind.
>>
>>107404593
Well said. Midwits obsess over memory arenas and explicit allocations, but I've yet to see any of them post actual benchmarks and how their strategy translates to actual performance gains. It's like they keep obsessing over the right way to hold a hammer without ever hammering in a single nail.
>>
Another nocoder thread.
>>
>>107406791
what is "most correct programs"?
>>
>>107407103
Cloudflare outage was caused by a correct program btw.
>>
>>107407116
The behavior of their config generator and orchestrator was not correct.
>>
>>107407127
So? The Rust program was correct.
>>
>>107407149
Yeah.
>>
>>107404326
Borrow chexking is orthogonal to memory management
>>
>>107407032
>borrow checker is more reliable and expressible
I don't know what "expressible" is supposed to mean here, jeetoid. If you meant "more expressive", then you're obviously wrong and you've already conceded this. External static analysis is "unreliable" only to the extent that it allows you to assert things about your own code that algorithms can't deduce. This is expressiveness. Writing C++ and then using external tools allows you to express your programs and their invariants much better than Rust with its braindamaged borrow checker. This is not up for debate and I'm not gonna read anything more written by your brown hands.

>>107407103
>what is "most correct programs"?
Programs that provably work as intended without any memory errors, where "provably" refers to the general practice of proving rather than the niche practice of bending over for the borrow checker toy.
>>
>>107407234
>Borrow chexking is orthogonal to memory management
It could be there's a certain LGBTQ+ programming language that conflates them.
>>
>>107404326
>Anyone try programming this way? How did it work in practice?
wtf?
arena allocators are a classical programming pattern in c

>enter context : create arena
>leave context : destroy arena

i use them constantly, everywhere.
nothing arcane.
you allocate a chunk and distribute pointers to it
if you run out of space, you allocate a new chunk
its a code organization measure as much as it is a performance enhancing measure because you cut down on actual allocations.
you allocate once per 1000? 100000 items?
your allocation time basically dissolves into nothing
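rough sketch of the "allocate a new chunk when you run out" part, in case anyone wants to picture it (not lifted from any particular codebase):

#include <stddef.h>
#include <stdlib.h>

typedef struct Chunk {
    struct Chunk *next;
    size_t used, capacity;
    unsigned char data[];                       /* flexible array member: the storage */
} Chunk;

typedef struct { Chunk *head; size_t chunk_size; } ChainArena;

void *chain_alloc(ChainArena *a, size_t size)
{
    size = (size + 15) & ~(size_t)15;
    if (!a->head || a->head->used + size > a->head->capacity) {
        size_t cap = size > a->chunk_size ? size : a->chunk_size;
        Chunk *c = malloc(sizeof(Chunk) + cap); /* one real allocation per many items */
        if (!c) return NULL;
        c->next = a->head;
        c->used = 0;
        c->capacity = cap;
        a->head = c;
    }
    void *p = a->head->data + a->head->used;
    a->head->used += size;
    return p;
}

void chain_free_all(ChainArena *a)              /* leave context: destroy arena */
{
    for (Chunk *c = a->head, *next; c; c = next) {
        next = c->next;
        free(c);
    }
    a->head = NULL;
}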
>>
>>107407335
>I don't know what "expressible" is supposed to mean here
Let's define it as a range of programs and algorithms it can correctly validate.

>Writing C++ and then using external tools allows you to express your programs and their invariants much better than Rust
That's wrong. Rust has been designed with such static analysis in mind. C++ static analyzers that e.g. attempt to track lifetimes generally do not allow you to express more complex lifetime bounds the way Rust can. It also all falls short when you call external code.
In general it's not as simple as writing C++ code and then running it through a static analyzer. You often need to provide it with extra information via comments and such to allow it to prove correctness. And this is generally much more limiting in terms of what can be expressed than Rust syntax.
>>
>>107407463
>Let's define it as a range of programs and algorithms it can correctly validate.
In that case you're trivially wrong and there's no further discussion to be had. Feel free to read and re-read my previous posts until your 80 IQ jeetoid mind starts to grasp the point.
>>
>>107407480
Not an argument
>>
>>107407485
Your delusional mental illness and context window of 1 post aren't arguments, either, troon.
>>
>>107407491
Midwit is getting angry, heh.
>>
File: seething_calmly.jpg (39 KB, 460x663)
>Midwit is getting angry, heh.
Imagine being so trooned-out that you start believing crippling intrinsic limits on what programs your toy language can express (to the point where you can't even implement basic data structures) make it more expressive.
>>
>>107407335
>Programs that provably work as intended without any memory errors, where "provably" refers to the general practice of proving rather than the niche practice of bending over for the borrow checker toy.
that doesn't answer the question.
what is "most correct programs"?
most means >50%.
provide sources with specifics.
>>
>>107407600
>asks a question
>gets a concrete answer to the question
>that doesn't answer the question
Another trooned out mental patient. I enjoy these discussions because they bring out the worst of you out of the woods to demonstrate what kind of "people" use this crippled memelang.
>>
>>107407611
>what is "most correct programs"?
>most means >50%.
>"""provide sources with specifics."""
the floor is yours.
>>
>>107407623
See >>107407611
>>
>>107407647
i accept your concession
>>
>>107407661
I note your psychotic illness. "Provide sources with specifics" is not the question you asked.
>>
>>107407611
>I enjoy these discussions because they bring out the worst of you
Nobody tell him
>>
>>107407678
That particular "no u" doesn't really work in a context where it's your cult vs. literally the rest of the world, estrogen-addled cripple.
>>
>>107404326
it used to be that you had severe memory limits anyway, and you basically just used all of it, so you had to use it wisely.
>>
File: safe knife.png (761 KB, 952x635)
The recent Cloudflare incident has proven that Rust is nothing more than pic related; you can stop pushing this shitlang now.
>>
>>107405996
Nope, sadly you are black :( and all of this will go over your head
https://www.ralfj.de/blog/2018/07/24/pointers-and-bytes.html
https://www.ralfj.de/blog/2020/12/14/provenance.html
https://www.ralfj.de/blog/2022/04/11/provenance-exposed.html
>>
>>107407743
>you can't use it to stab other people but you can use it to cut your wrists
Appropriate choice for that target audience.
>>
File: kernighan-on-dijkstra.jpg (86 KB, 850x400)
>>107407763
>mfw when autists
autists should get regularly beaten up

>inb4 thats not an argument
fine observation, timmy
a pointer is a reference. anything else is irrelevant mental retardation
>>
>>107407771
This is a "safe knife" in UK, because you know, the subhumans who attack people with machetes totally use them to stab and not hack lol.
>>
>>107407763
A pointer is literally just an integer.
>>
>>107407790
I don't know about machetes but you're not "hacking" anyone to pieces with that joke of a thing you posted.
>>
>>107407816
Because I'm not brown.
>>
>>107407823
Could've fooled me.
>>
>>107407811
Not if you have a retarded optimizing compiler. Provenance is (unfortunately) extremely important in these cases and drastically transforms the meaning of a pointer in a program, even if the bit representation is unaffected.
>>
>>107407840
No it doesn't, take your meds.
>>
>>107407811
no.
disassemble this code
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct s_str_a
{
    size_t size;
    char *text;
};

struct s_str_b
{
    size_t size;
    char text[];
};

int main(void)
{
    char text[] = "this is a string";
    struct s_str_a *str_a = malloc(sizeof(struct s_str_a));
    struct s_str_b *str_b = malloc(sizeof(struct s_str_b) + sizeof(text));

    str_a->text = text;
    strcpy(str_b->text, text);
    //here str_a->text is a number (an address held in a separate field)
    puts(str_a->text);
    //here str_b->text is an offset (flexible array member inside the same allocation)
    puts(str_b->text);

    free(str_a);
    free(str_b);

    return 0;
}
>>
>>107407843
Cease.
>>
>>107407846
>C
>malloc
yea I'm not letting my compiler see this slop
>>
>>107407873
>retard changes his argument to always win
>>
>>107407873
youre a mental retard, anon
you should shut the fuck up about c forever more
>>
>>107407881
>>107407887
Write this with a proper allocator.
>>
>>107407881
well, all he won is the privilege of getting insulted, kek
>>
>>107407901
why?
because youre a useless faggot?
whats in it for me?
do you at least offer oral sexual gratification?
>>
>>107407912
So I can see the assembly I'm meant to see and not 5000 lines of glibc bloat.
>>
>>107407743
SO FAT YOU LOOK
AND SEE FOOD
>>
>>107407918
>call malloc
wow, that was fucking traumatizing
now shove the code into godbolt nocodeshitter retard
>>
>>107407811
No. Pointer and address are not the same thing.
>>
cnile discord found this thread
>>
>>107407944
Godbolt doesn't work on Male Poon so I don't think I will.
>>
>>107407963
>t. seething, preemptively
good. youre learning, adjusting to the new normal

soon you wont even need to get btfod
you will think about posting on /g/
and immediately get hysteria followed by seizures
>>
>>107407966
>hipster trash
nothing of value was lost
stay filtered, faggot
>>
>>107408003
Ok, now write a proper example.
>>
>>107408021
i have a better proposition:
get fucked hipster faggot lamao
>>
>>107408032
I accept your concession about a pointer being a pointer (one register on all relevant CPUs).
>>
>>107404383
The concept of pointers is simple.
The pointer syntax is what's fucked(i'm talking c).
>>
File: boomer-pepe.jpg (63 KB, 637x651)
>>107408051
stay filtered faggot
lamao
>>
>>107404449
>Maybe it works for graphics, I don't have much experience in graphics programming, but it's surely not an optimal strategy for most programming
I deal with Linux kernel programming at my $dayjob and drivers can often be an area where you know exactly how much memory you need, allocate it when initialising the device and then free it when the device disappears. You might allocate a struct describing the device, a few buffers for DMA or data bouncing. On the other hand, while you often deal with fixed sized chunks of data, the consumer of your driver often has to deal with storing an unknown amount of those chunks.
>>
>>107404326
2 and 3 are the same lmao
>>
>>107407374
It does not
>>
>>107407442
>arena allocators are undefined behavior in c
FTFY
>>
>>107404593
>>107407056
I work for a fintech company. It is absolutely a requirement to consider data models, their memory footprint, and their lifetime. You can’t manage hundreds of thousands of portfolios in a meaningful way without throwing more memory and parallel processes at the problem (in corporate land, that means more AWS ECS/EC2 boxes). You just don’t work in an industry where it matters
>>
>>107413121
>fintech
Isn't real. You have no right to lecture anyone. Everything he said is right for real world software.
>>
>>107404593
>One round trip to the database dwarfs all your memory allocation concerns for the entire function/method/whatever.

It depends what kind of code you're writing.
To start with, the guys writing your database have to care about memory allocation.
So are people writing:
- any sort of real time audio/graphics code (video games, audio software, codecs, etc.)
- high-frequency trading software
- operating systems
- web browsers
- etc.

I could extend your logic and say the database does not matter since the product being ordered by your business system will have to be physically loaded into a truck, shipped across country, etc. and the time to do that dwarfs the database access time.
>>
>>107404593
You drink Google's cum for a living so I can understand why you're such a retarded faggot.
>>
>>107413882
> accounting and high frequency trading platforms aren’t real
I’m sure you know “real world software” while staying indoors unemployed. Everything he said is “right” for the fags that don’t have to care about performance and just throw more memory at the problem. IO vs CPU bound operations. I’m sure you’re one to complain that all the bloatware in today’s world is awful, yet you also are arguing “you don’t need to care about any memory optimization, it’s all IO”.
>>
>>107404326
It's valid to pre-plan your exact memory usage and pre-allocate everything. Think of an 8-bit console game. They had specific memory addresses for things like score and health. If you had game "objects" appearing and disappearing, such as an enemy sprite appearing on screen, there was a set number of memory slots for the object data, and the game was designed so that there was a maximum limit to the number of objects that could appear on screen at any one time.

In modern software, it's still viable. If you are reading user data, pre-allocate the buffer size. Use mmap to map files. The convenience of allocating memory when needed isn't necessary. If you have a complicated program that could do a variety of things (you don't know the memory requirements until it starts doing things), divide the program into modules so that each module knows its memory footprint and allocate and free memory per module as needed. Reducing ad hoc memory allocation is a reasonable style objective. If you can't do this there is a good chance your program has scope bloat.
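Minimal sketch of the mmap-the-file idea on POSIX, since it came up (error handling kept to the bare minimum):

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Map a whole file read-only instead of malloc()ing a buffer and read()ing
   into it. The kernel pages it in on demand; munmap() releases it in one go. */
void *map_whole_file(const char *path, size_t *len_out)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) {
        close(fd);
        return NULL;
    }

    void *p = mmap(NULL, (size_t)st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                       /* the mapping stays valid after close */
    if (p == MAP_FAILED)
        return NULL;

    *len_out = (size_t)st.st_size;
    return p;
}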
>>
>>107404326
These numbers seem kind of retarded, isn't GingerBill a game developer? The lifetime of most memory allocations in a game is not known; you can't know how many bullets the player is going to be firing from his gun or how many enemies the level designer placed on the level
>>
>>107404733
t. 3 hundred thousand memory allocations from typing a letter in the address bar
>>
>>107414741
no, ginger bill writes visual effects software used to simulate things like water and smoke used for CGI

regardless, there are all kinds of memory being thrown around in a game. an arena that gets cleared every frame is extremely common for scratch memory. lots of older games just preallocate what they know they can operate with without running out, in the same way your program has a stack limit but doesn’t overflow anyway. you also don’t just use bump allocators for literally everything anyway, you have to give up certain luxuries and complicate your implementation as soon as your lifetimes no longer match or you need to start growing memory. you get one freebie contiguous memory block from the OS that you can just grow forever thanks to virtual memory but that’s it
>>
>>107414741
>you can't know how many bullets the player is going to be firing from his gun or how many enemies the level designer placed on the level
you kind of do. the former cannot exceed the firerate of the gun over the maximum lifetime of the bullet and the latter is known statistically
>>
void *mem = sbrk(4096);

just keep growing the data segment every time you need memory
>>
>>107414922
Allocation memory pools are still "unknown lifetime"

>>107414997
You either do or you don't
>>
a lot of this thread looks like the same guy replying to his own post
>>
>>107415137
Are you that guy?
>>
>>107415058
if you know it is physically impossible to exceed an n sized bump allocator so you never have to grow it and the size is still reasonable, is it known or unknown? it would depend on your definition of unknown
>>
>>107415267
it's unknown
>>
>>107415276
unknown implies you need some implementation complexity in the form of a mechanism to grow the structure or reallocate it entirely though. there is no additional cost or complexity in this case
>>
>>107415301
I suppose so yeah
That's still not going to be the case for anything in a game though
>>
>>107404367
>C is a typeless language, it does not know what data is what type. That is why printf allows you to *interpret* data how you see fit

This is so wrong. C has a type system where everything has a type.
>>
>>107415321
>That's still not going to be the case for anything in a game though
uh, the examples you gave were specifically for a game
>>
>>107415137
a lot of comments are from a regular nocoder retard who thinks he wins arguments by having the last word. some of us just have some fun with him from time to time.
>>
>>107415137
https://en.wikipedia.org/wiki/Dead_Internet_theory
>>
>>107415276
A pedantic fag, I see. If you never exceed it through profiling, you have a KNOWN bound and can pre-allocate that size contiguous memory chunk to that module. Fucking anons on this board larping as programmers working on large scale systems
>>
>>107415331
and you can't do them with a bump allocator

>>107415414
yes but you can't preallocate all the memory you need for a game level unless it's a game where new objects aren't created
>>
>>107415335
are you the guy who goes on about cniles?
>>
>>107415434
yes you can, arguing otherwise is like saying you can't call more than n functions in any program because the stack size is fixed. it doesn't work that way. every game in existence has a working memory amount it needs to function that it will not grow beyond unless there is a leak. allocating a single block for everything you need for a game to run one time at the beginning of the program is totally possible, and not just for shit you'd only see on an atari 2600 or an arcade cabinet from the 80s
>>
>>107415476
>every game in existence has a working memory amount it needs to function that it will not grow beyond
Ok so what if I fire my gun a million times during a level are you gonna preallocate a million bullets?
>>
>>107415434
I know I’ll never have more than 100 objects on screen. I can allocate 100 objects beforehand. When the module “needs” an object, a naive approach would be to get an index to the next free object in the contiguous memory chunk and fill it with the initial data. Upon deallocation, you just clear the data back to some baseline, ready for reuse. You never need to request memory on the heap; it’s already there. You’re just placing temporary data while the object is in use. Fuck, it’s not hard you dipshit. I bet you love LINQ where you always call .ToList(), losing all semblance of “benefit” to LINQ (where you gain an iterator) and instead forcing the entire container to be constructed before moving forward.
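Rough sketch, in C for the sake of the thread (the field names and the linear scan are just for illustration):

#include <stdbool.h>
#include <stddef.h>

#define MAX_OBJECTS 100                     /* "never more than 100 on screen" */

typedef struct { float x, y; int hp; bool in_use; } Object;   /* made-up fields */

static Object pool[MAX_OBJECTS];            /* lives for the whole program, zero heap calls */

Object *object_acquire(void)
{
    for (size_t i = 0; i < MAX_OBJECTS; i++) {
        if (!pool[i].in_use) {
            pool[i] = (Object){ .in_use = true };   /* reset slot to baseline */
            return &pool[i];
        }
    }
    return NULL;                            /* by design this should never happen */
}

void object_release(Object *o)
{
    *o = (Object){ 0 };                     /* clear back to baseline, ready for reuse */
}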
>>
>>107415434
You can just do things.
https://github.com/ioquake/ioq3/blob/main/code/game/g_main.c#L38
>>
>>107415497
a million bullets don't all exist concurrently. we've been over this. if it's hitscan, you wouldn't even be allocating any memory to begin with, you'd only be doing a trace. if they are a physical object, you set constraints like a maximum lifetime after which they naturally destroy when they collide with nothing. now with that maximum constraint, you know how many bullets can possibly exist given every object that can create them and the speed at which they can be created, and with that you can make a bump allocator that cannot overflow using some basic napkin math. you can slim it down with more constraints, like never having more than a few dozen or so enemies active at the same time.
>>
>>107415497
Most games don’t have physical bullets. They’re hitscan ray collisions
>>
>>107415448
if he is he should post more, the knife licker pictures were funny
>>
>>107415523
>>107415526
That's called a pool allocator; you are dynamically allocating memory, and the lifetime of that memory is not known at compile time
>>
>>107415531
You need a pool allocator to do that, not a bump allocator, and like I said that's still dynamic memory management
>>
>>107415543
I'm done with this conversation. You clearly don't understand what we're talking about or you're just being a fag. Either way, this is pointless. Enjoy unemployment
>>
>>107407600
For each distinct borrow checking Rust program you can add a provably correct but not borrow checking fragment to it; that's a bijection from the set of borrow checking Rust programs to a subset of provably correct but not borrow checking programs
>>
>>107415572
You don't know what you're talking about. What you described makes sense, but it's a dynamic memory allocation scheme that you use when you don't know what the lifetime of an object is. We're talking about the OP where GingerBill says that only 1% of things have unknown lifetimes, which is bullshit in this case
>>
>>107406941
/thread
>>
>>107415555
what's your point?
>>
>>107415582
> gun shoots at 10 bullets per minute
> gun can only hold 30 bullets per magazine
> magazine changes take 5 seconds
> bullets travel at 30 m/s
> map never exceeds 1000 m cubed
Do the fucking math, you absolute retard. Tell me the bullets don't have a known total ever to exist at one moment and a maximum time they could live
>>
>>107415590
My point is that the claim that 1% of objects have an unknown lifetime is bullshit
>>
>>107415596
but it is known
>>
>>107415595
>Tell me the bullets don't have a known total ever to exist
They do, but that's not what lifetime means in this context
>>
>>107415607
No it's not known. You don't know when the player is going to fire their gun and you need to allocate a new bullet
>>
>>107415620
good fucking god
>>
>>107415573
can you demonstrate this with fizzbuzz?
>>
>>107415626
I don't think you understand what lifetime means here
>>
>>107415634
>dude we just don't know
we do, we do

is a bullet going to exist for an hour or is it going to exist for less than 3 seconds. we get to make the constraints and set the rules. again, it's just like the stack. fixed size even though you don't actually technically know how many functions you're calling. there's nothing dynamic about this
>>
>>107410845
basically. and then this means managing pointers on pointers
>>
>>107415448
no. nocoders have nothing to do with C, or any other language, by definition.
so, i don't subscribe to the Cnile moniker, or any such monikers given to the gantry tards imagining themselves as a part of "the sport" by using some jargon and regurgitating some talking points.
i've written a decent amount of C in my life anyway, including adding a small feature to a popular opensource tool that cut a release with that change recently.
oh, and ageism is not cool regardless.
>>
>>107415670
lifetime does not literally mean how long it is alive
lifetime means you know at what point in your code it gets allocated and at what point it gets deallocated
Because it depends on when the player clicks fire, you don't know that
You can create a pool and allocate bullets from there like you said, but you're still doing dynamic memory management for an object of indeterminate lifetime; all you know is that there will never be more than X, so the pool can have a bounded size
>>
>>107414719
only sane answer ITT
>>
>>107404326
>1. Leave it to GC
good enough for me and enterprise
>>
>>107415697
ok, but we know the lifetime of the allocator and statically know its maximum size, to the point that we can allocate it at the beginning of the program, so what difference does it make? nothing about any of this is actually dynamic other than the very first allocation we do, if that

everything in this example is known. it's not a problem like a compiler, where we actually have no clue how much memory we will need. this is a very constrained problem.
>>
>>107415760
>nothing about any of this is actually dynamic
a memory pool is a dynamic memory allocator. You've just created your own dynamic memory allocator instead of using malloc
>>
>>107415775
>this thing that is statically known and will never need to grow in size is actually dynamic
uh, no?
>>
>>107415792
when you take memory from the memory pool, you are dynamically allocating memory. Dynamically allocating memory does not mean "calling malloc"
>>
>>107415628
Say you have some random Rust program consisting of a borrow checking implementation of fizzbuzz:
fn fizzbuzz(n: i64) {
    ...
}

You can add some provably correct but not borrow checking block to the start of it:
fn fizzbuzz(n: i64) {
    {
        let mut arr = [0; 2];
        let a = &mut arr[0];
        let b = &mut arr[1];
        *b = *a
    }
    ...
}

As there's uncountably many such blocks, the set of correct but not borrow checking Rust programs has a higher cardinality than the set of borrow checking Rust programs
>>
>>107415714
>g-ood e--
>nough fo-r me--
>and ente-rprise
>>
>>107415839
>there's uncountably many such blocks
lol retard
>>
>>107415839
how is mutable access to array elements "correct" or relevant to the implementation?
>>
>>107415839
>doesn't even know fizzbuzz
plz be bait
plz be bait
plz be bait
>>
>>107415792
yes...you aren't using the stack, thus it's dynamic memory (i.e. heap)
>>
>>107415839
this is a bot
>>
>>107415839
>As there's uncountably many such blocks
wrong. proof: all computer programs can be represented as integers, the integers are countable, such programs are a subset of computer programs, qed.
>>
>>107415999
you can store it in bss segment if you want, it ain't dynamic
>>
>>107415414
>>107415301
Cloudflare thought it had a known size for their features
>>107415792
>>107415775
>>107415999
>>107416023
Let me explain anon's point. A memory pool with a fixed size is a dynamic allocator over a (possibly) static resource. Just like .rodata, .bss, .text, etc. are a "static" allocation over the lifetime of a dynamically executed program. Or the page table is a dynamic allocation over the static addressable memory. Which is a static allocation over the dynamic hardware you can take out and replace with a smaller or bigger one.
Static and dynamic are dependent on the perspective.
>>
>>107415839
You can write that block borrow-checkably
>>
>>107416009
>all computer programs can be represented as integers
False. If this were true, you would have solved the halting problem.
>>
Well, this thread turned out to be extremely retarded.
>>
>>107417126
All computer programs are, in the end, data written to the drive. You absolutely can represent that as a single big integer. It doesn't carry any meaning for the operation of the program; I don't see how the halting problem applies here
I'm not sure if that's what the original anon meant but it makes sense
>>
>>107417204
Nah, there are some pretty good replies here.

Different perspectives on "how much can you plan your memory usage over known size constraints and lifetimes" and "how practical is it to have your primary defense against a use-after-free be constraining your memory reservations to some allocator, probably an arena, where you free everything when you leave that part of the code?"

They basically shake out to:
> You should do this. You have to think about your code differently, but it's a good thing.
> It's a technique useful for certain applications but not generally applicable.
> It's dumb for enterprise. Most of your performance concerns are about shitty architecture around things like the DB. You should be using a GC language in the first place.

Pretty much what I wanted out of the thread. Disagreement, but intelligent perspectives. Obviously there was going to be retarded arguments as well. This is /g/.
>>
>>107404326
>request a lot of memory
>do everything you want in that memory
>release it all at once
i think this idea has been explored before
I've known them as arenas
https://en.wikipedia.org/wiki/Region-based_memory_management

I haven't done any serious programming but I imagine such an approach would be bad if your average memory cost was much lower than your worst-case memory cost. So perhaps if you were parsing a context-free grammar and needed to maintain a stack of non-terminals whose length should be say log n but in the worst case could be n.
>>
Maybe modern OS's have fancy memory prediction algorithms to predict whether some swap space needs to be used. Perhaps allocating massive chunks aperiodically would mess up those predictions.
There's also another type of memory management, which is to do no frees and have the OS clean up after you terminate. Only good for short programs
>>
>>107415036
This is so inane, why not just mmap about 1TiB of memory in one syscall?
Page faults are cheaper than syscalls.
You still need a free-list or whatever your memory reuse strategy is.
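For reference, reserving a huge range up front looks something like this on Linux; the 1 TiB figure is just the number from the post, and you'd still layer your own bump pointer or free list on top:

#include <stddef.h>
#include <sys/mman.h>

#define RESERVE_SIZE ((size_t)1 << 40)      /* 1 TiB of address space, not of RAM */

/* MAP_NORESERVE + MAP_ANONYMOUS: the kernel hands out address space only;
   physical pages show up lazily via page faults as you touch them. */
void *reserve_heap(void)
{
    void *base = mmap(NULL, RESERVE_SIZE, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    return base == MAP_FAILED ? NULL : base;
}

/* Give ranges back with madvise(addr, len, MADV_DONTNEED) or munmap(). */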
>>
bump. indian shill posting hours
>>
>>107419961
Why don't you post something of value to show us that you aren't indian or a shill then?
Plenty of points ITT that were simply ignored, including my own.
>>
File: 1995828204893094141_1.jpg (171 KB, 1536x2048)
>>107420025
>Plenty of points ITT that were simply ignored, including my own.
I thought about posting something, but then I realized I didn't bother to read anything already written to the thread, my thoughts have probably been stated and ignored already.
>>
>>107417208
>You absolutely can represent that as a single big integer.
>I don't see how the halting problem applies here
That's because your computer is not Turing Complete.
A Turing Machine requires infinite memory.
No, you cannot represent that as an integer, see: Cantor's diagonal argument.
>>
>>107420025
the thread started with some incoherent sentences, a tard name drop, and some basic misunderstandings (e.g. borrow check and RAII).
the thread ended here >>107404719 with no real world examples provided to refute it.
i didn't bother actually reading much after that, as i expected the thread to either die, or fall into peak retarded /g/eet action. judging by >>107415839, it was the latter.
>>
>>107417708
I think Haskell's runtime does this.
>>
>>107420342
I highly doubt that since generalist cucks often do the cope of allocating small chunks, like a few MiB at a time.
>>
>>107420311
Anon, pick an exe on your disk right now. Any exe.
What's its file size? I'm sure it's not infinite
If you take the bytes of that exe, and express them as a bigint, that gives you an integer
This has nothing to do with how the program runs or if it halts, if the program exists then it can be represented as an integer
>>
>>107421241
The halting problem is, in fact, solvable for any finite machine.
>>
>>107420311
>A Turing Machine requires infinite memory.
Running an arbitrary Turing machine might, but the description of a Turing machine does not. Are you stupid?
>>
>>107422359
The program a Turing machine runs, itself exists on the tape.
Thus yes, an arbitrary program requires infinite memory.
Does that clarify?
>>
>>107420311
the tape needs unbounded memory, but at any given point in the program's execution it is finite and hence can be represented as an integer. the program requires arbitrary amounts of memory but by definition only a finite amount (i.e. a program that requires infinite memory to represent is not a Turing machine).

>>107423801
the program itself is uniquely identified by a description of its transitions. The program execution state is that plus a particular state of the tape. The former is finite, the latter may be infinite, but both are countable.
>>
>>107425369
The size of the initial state on the tape is countably infinite, yes.
However, the superset of initial states is UNCOUNTABLE, which is why the Halting Problem is unsolvable.
The initial state corresponds to a REAL NUMBER with finite or countably infinite number of digits.
The superset corresponds to the set of REAL NUMBERS.
>>
>>107415861
kek
>>
>>107425369
>>107426684
The description of transitions is finite, but that is not a "computer program on your machine represented as an integer"
What we typically call a "computer program" is a combination of both the finite intrinsics (description of the transitions) AND higher level abstractions encoded on the tape itself.
The fact that some of the program is encoded, and potentially self-modified, on the tape, means that no, all computer programs CANNOT be represented as a finite integer, but rather are to be represented as a REAL NUMBER.
>>
>>107426987
Any program can be represented by a Turing machine, and any Turing machine has a finite description.
There are a (countably) infinite number of integers, but any given one can be represented with a finite binary sequence. Likewise there are a countably infinite number of computer programs but each one has a finite description.
>>
>>107427642
Any program can be represented by a Turing machine INCLUDING the tape, which is infinite.
The state of the tape is represented by an infinite binary sequence, which corresponds to a Real Number.
>>
>>107427853
The program itself is the Turing machine. The input--the starting state of the tape--is separate. Thus programs are countable, but inputs are not, if you allow them to have infinite length--but how would an input of infinite length be useful? If the program halts it obviously cannot have visited every cell.
>>
>>107427950
>The program itself is the Turing machine. The input--the starting state of the tape--is separate. Thus programs are countable, but inputs are not
This is a meaningless distinction, especially in the Von Neumann architecture.
There is no distinction between "program" and "data" - in fact that's part of the very proof of the Halting Problem being unsolvable, using the program itself as input data.
>length--but how would an input of infinite length be useful?
Passing pi as an input to a computation happens all the time.
>>
>>107428545
Suppose you want to write a program to make a simple calculation using pi. Say, computing the area of a circle. You cannot, in the real world, pass all the digits of pi as input. Nor can you use an output of infinite length--you couldn't get one anyway, since the Turing machine could not halt if it had to write an infinite amount of data. But pi is computable (as is any infinite piece of data that you could want to operate on), so you can give the Turing machine a certain precision for the answer (number of digits after the decimal, say), and it can
1. compute how many digits of pi it needs for the answer to be within that precision
2. compute those digits of pi
3. use that approximate value in the formula for area

You can always get an answer as precise as you want with a finite input.
>>
>>107428905
>But pi is computable
No it is not, the Turing Machine would never halt in computing pi
Notice the tradeoff between time and space.
>as is any infinite piece of data that you could want to operate on
Incorrect. Suppose instead you want infinite random bits (not pseudorandom) as input, for the purposes of having a bottomless sieve of entropy, not computable at all.
>use that approximate value
That is not pi.
>>
File: 1764430607209755.jpg (358 KB, 1156x1156)
>Justifying memory waste so blatantly
>>107429153
Scientists use less than 50 digits for literally anything ever you disingenuous faggot.
Nobody cares about theoretical cope since math isn't science.



