/g/ - Technology


Thread archived.
You cannot reply anymore.




File: teh.png (16 KB, 819x397)
I am surprised that distinguished programming projects have not yet embraced a proper understanding of asynchronous execution. Threading remains the dominant model for parallelism, yet it is not inherently superior to a process‑based approach.

I have drafted a schematic illustrating how asynchrony should operate at low levels. My runtime requires a single controlling loop rather than multiple small loops and locks, which tend to transform modules into incomprehensible bloatware. I will post an additional diagram to clarify this concept.

This message is addressed to high-IQ masterminds. Retards are allowed to pass by.

-- PS

Introducing a full‑blown “event system” is unnecessary; instead, provide a “check” API that can be polled. The same mechanism should be used both to verify that a result is ready and to retrieve that result.

Specify optimal polling intervals so that callers do not waste CPU cycles. Implement guards against busy-waiting, such as returning immediately when no data is available or throttling the call rate.

By offering a lightweight, poll‑based interface with built‑in safeguards, the runtime can remain efficient without the complexity of an event‑driven architecture.
>>
File: equal.jpg (75 KB, 970x924)
current implementations; quote from author

> It has also made me turn around somewhat and I’ve reconsidered that perhaps using a threaded native resolver is the better way for libcurl to do asynchronous name resolves. That way we don’t need any half-baked implementations of the resolver. Of course it comes at the price of a new thread for each name resolve..
>>
>>108439807
no one does async programming because most programs aren't i/o bound
>>
File: somebody.jpg (416 KB, 1536x1024)
>>108440123
nobodies dont even try programming. everything is 1/0 in the compukter
>>
>>108440123
>t. guy who hasn't heard about the internet, JS, Go, etc.
>>
>>108439807
just use epoll_wait
>>
>>108440834
just don't create threads; use whatever you feel is appropriate in your gear. my gear only needs to check that your gear is finished. the same approach that is implemented in the **multi** curl handles should expand to the resolver module and stay single-threaded.

also, epoll is not portable (windows has its own APIs to solve blocking), and CLIB (antique) is to be replaced in my runtime.
>>
>>108441020
Threads are fine if you're not exceeding the number of hardware threads.
If you need more than that, then you should switch to a green thread/coroutine/task/fiber/whatever the name is now, system.
>>
>>108441100
please read the initial message - i switched already, it is called process-based parallelism and asynchronicity. no interruptions allowed; waiting 0 seconds is optimal. you want threads - pass by. you want to prove threads are better - you cannot, because process-based parallelism is better.
>>
>>108441174
process based parallelism is just shitty threads, because now the memory protection and TLB have to be reset on every switch
You can have a thread which goes to sleep until it gets a new task, like a DNS request. If you want that thread to poll for new tasks, it can, but that's kind of dumb unless you expect tasks to come in every few microseconds or less consistently.
>>
>>108441227
while your thread goes to sleep, wakes up, enters and exits critical sections, locks and unlocks its zips, my uninterruptible runtime executes tasks the async (the proper) way.

while your thread exists only to sleep on the system's epoll or eeeepoll, it consumes system resources and requires tricky structures and callbacks cosplaying a clean and calm event system.

so ye, write down those timings or let me decide how often i will visit your absolutely dumb simple check routine. "yours" here is a metaphor, i doubt your involvement in module building.
>>
>>108439807
>My runtime requires a single controlling loop rather than multiple small loops and locks.
Using a loop to create a multiprocess-wide context which you use as the basis of a scheduler is not really asynchronous processing. Have you considered what occurs when you get two return calls simultaneously?
Think about the implementation, consider this:
(define (sync a b c)
  (cond ((= a 1) (action_a b))
        ((= b 2) (action_b c))
        ((= c 1) (action_c a)))
  (sync (eval-next-a) (eval-next-b) (eval-next-c)))

This places an order on the evaluation of the processes that is, intrinsically, blocking. Imagine if 'a' constantly returned a 1, would 'b' or 'c' ever get a chance to have their actions called?
You could argue that that is simply a detriment of the example, and that a proper implementation would be completely non-blocking. But that misses the obvious issues that form the entire field of asynchronous programming.
What if 'b' depends on the state of 'a', but that state is updated before 'b' is evaluated? (temporal issues)
What if the value of 'a' changes as a result of the evaluation of 'c'? (incorrect assurances of environment sanity within an iteration)
Now imagine that these processes do everything from outputting to a display, or reading from a network socket, or performing an FFT, or sending an email and so on. When that cond expression grows larger and larger, those issues become harder and harder to control.

I'm sorry but I have to say it: don't be naive, locks and "small loops" are the only way to do actual asynchronous programming. What you have described is gaining control of purely continuous systems by discretising them into distinct steps. That is fundamentally the point of continuous engineering in general, and it is a classic approach, but a naive and well-worn one.
>>
>>108441392
>it consumes system resources
Any thread or process still needs memory, but sleeping threads use no CPU.
>my uninterruptible runtime executes tasks the async (the proper) way.
By polling in a loop? That's retarded for real world use cases, where you don't want to use 100% CPU all the time.
Linux's futex, and more generally atomic wait and notify in c++ are excellent and designed to exactly solve this problem of ensuring fast wakeups for threads that sleep waiting on a condition.
Also, what do you mean by "uninterruptible"? I assume you don't mean that it will error if the scheduler preempts it?
Overall, you're framing this as processes vs threads, which is nonsense to start with. A process is more or less just a thread that has different memory protections and access policies from other threads. It has nothing to do with the concurrency model itself.
You mentioned using a single controlling loop, so I guess what you're saying is that you have a master process which polls for events and triggers different processes/threads to run? This is literally a shitty implementation of what atomic wait/notify does.
>>
>>108440793
how does async programming make my computer run faster if i am not using the internet?
>>
File: process-types.jpg (59 KB, 800x400)
>>108441453
oh sweet, a Lisp follower revealed as a threading cultist. i never doubted Lisp is incapable.

> Using a loop to create a multiprocess-wide context
this is magical thinking. where do those conclusions come from? each process has its own context, like those tabs in the world's dominant browser, named chromium (guess why it won over thread-based browsers?). i won't even comment on the other silly assumptions.

>>108441495
>thread or process still needs memory

you have a thread AND a process; i have ONLY a process, which is also clean of the issues mentioned before.

> polling in a loop?
do you know why i put multiple "e"s before the word "poll"? that was sarcasm.

> for real world use cases, where you don't want to use 100% CPU all the time
the loop is not your (as a module) concern; your concern is to do a piece of work and return ASAP. i'll decide (as the runtime/app) if i want to consume 100% like a retard or do something smart with it.

> atomic wait and notify in c++ are excellent
we are in da C house; those threaded snake oils don't apply here

> what do you mean by "uninterruptible"? I assume you don't mean that it will error if the scheduler preempts it?

uninterruptible means no module is entitled to suspend execution, to sleep on huertex, to invoke MessageBox(), or to spend 10 seconds calculating square roots. only the "primary route" (which the user programmed) can do that deliberately.

> process is more or less just a thread
> It has nothing to do with the concurrency model

my stockpile of clown awards is depleted, sorry.

> master process which polls for events and triggers different processes/threads to run?

if you're conditioned to see every tool as a hammer, then i cannot blame you for using a microscope to drive the nail. in actuality, i did some hierarchical scheme (pic related); it may look complicated, but it is much simpler than the complete IPC chaos hardcoded in other langs.
>>
>>108441652
Can you give a more practical example of your design? Here's how I'd handle curl with asynchronous DNS resolution.

single thread/process
curl starts up and initializes sockets for DNS and HTTP traffic
sets up epoll on those
sends out requests and does whatever other processing it requires
then it waits on the epoll instance for responses or timeouts
when it receives a response (DNS or HTTP), it will run the corresponding function to process the response, then potentially wait on the epoll instance again if required

You could split curl and dns into multiple threads or processes and put send and receive FIFO SPSC queues between them, but that just adds extra overhead that you don't need.
You could also just directly poll the sockets instead of waiting on them, but that's a generally bad design as you waste system resources.
Chromium runs each tab in its own process for memory isolation reasons. You could use threads instead of processes, but then javascript might be able to read out of bounds memory. It has nothing to do with the concurrency model.
>>
>>108441652
>the loop is not your (as a module) concern, your concern is to do piece of a work and return ASAP. ill decide (as a runtime/app)
I think I see what you're trying to get at here. The problem is actually deciding when to run different modules. You have to specify dependencies between them and build up a task graph.
Even though you're using C, this C++ library might interest you. https://github.com/taskflow/taskflow
>>
File: inadequate.jpg (46 KB, 500x404)
>>108441690
> a more practical example of your design
it is already in the curl's design, the way it operates with "multi" handles

> sockets for DNS and HTTP traffic
for some reason they (the author) decided to split DNS into another library/module - maybe because it was big enough. but ye, it may be the same library; then it must be properly wired with the "multi" operation. the "easy" path may stay synchronous, doing an epoll wait or whatever you usually do on those (you cannot do both multi and easy).

> You could split curl and dns into multiple threads
it's already split >>108439830 into either a "threaded resolver" (inside curl, calling the synchronous system API) or the c-ares module, which is also threaded and does effectively the same thing, making itself unreasonable (which is reflected in the author's comment). another problem with c-ares is that it uses different storage for configuration on windows (the registry) while the NIX implementation uses the filesystem. this is not a portable solution!

> or processes
no, a process for name resolution is like using a microscope to hammer nails. it must be an async operation.

> You could also just directly poll the sockets instead of waiting on them, but that's a generally bad design as you waste system resources.

the CPU is not your (as a module) concern; this is how multi handles were and will be. https://curl.se/libcurl/c/curl_multi_poll.html (a timeout of 0 is what makes it an async op). i think it went full circle at this point.

> Chromium runs each tab in its own process for memory isolation reasons.
the reasons (you may fantasize) are secondary; the outcome is primary - the threading model sucked, the process model prevailed.

>>108441719
>when to run different modules
you may make assumptions based on the device (network comms, keyboard input, cpu etc), on previous history, or even on what the module suggests. mixing in a threaded c++ library is inadequate.



