/3/ - 3DCG

File: node spaghetti.webm (3.21 MB, 1062x598)
A place for Blender's best headache.
>>
>>993075
Remember when the mods nuked /3/? When we didn't have shit useless threads like this?
Yes anon, the software you use is the better one and you were right all along. You're a good boy, you're valid ¬_¬ ...
>>
>>993079
tf are you talking about?
>>
>>993075
Ahhhh, so that's how taffy is made...
>>
anybody have any good tutorials? I'm stuck in amateur hell
>>
>>993082
schizophrenia, please understand
>>
What are geo nodes for anyway? Can somebody QRD me on what they do?
>>
>>993148
procedural modeling. infinite possibilities but infinite complexity.
>>
>>993187
that doesn't help me understand what you make with it
>>
File: H05EZb1897wVwhlX.webm (964 KB, 718x384)
>>993194
everything, you can make anything mesh-related with it. that's the scope geo nodes have. literally, go on Twitter, there are thousands of examples
>>
>>993090
Tutorials are the reason you're stuck in amateur hell. Stop watching random tutorials. Pick something you want to do or make and start taking steps to make it. Look up specific things as you go. No point in learning how to use xgen when you only do hard surface modeling and weapons for example.
>>
>>993194
Basically you can program your own mesh modifiers. Stuff that adds or replaces faces and attributes. Like, want a tiled roof? You can write a modifier that'll lay tiles all over the faces of a shape. Wanna make animu outlines but don't like the solidify or outline modifiers? You can write your own. Stuff like that. All the stuff that'd be done with vertex or compute shaders.

The price to pay is that the way they're set up is super goddamn unintuitive compared to material nodes, like holy shit.
>>
>>993200
>>993215
Oh ok, I think I get it now. Thanks for explaining frens
>>
File: THE ANTICHRIST.png (232 KB, 694x662)
>>993075
>look up how to do fucking anything in blender recently
>'it's easy bro, just use geo nodes!'
>said """easy""" task is the most convoluted spaghetti i've ever seen
AAHHHHHHH I HATE NODES I HATE NODES I HATE NODES I HATE NODES
>>
>>993200
Where is that webm from? Some of those actually look pretty useful.
>>
File: 21342.png (1.09 MB, 1003x909)
On the plus side, geonodes is so overly complicated and hard to understand that once you actually become halfway competent with them, Houdini will seem like a cakewalk in comparison.
>>
Anyone here halfway good with math? I want to create a point (A) that moves between two other points (B and C respectively).

Points B and C are given a value that can smoothly transition between 0 and 1, depending on the position of point A.

When A is closer to B, then B is the larger value. When A is closer to C, then C is the larger value.
When A and B share the same position, then B's value is full 1, and C is fully 0. When A shares the same position as C, then C is fully 1, and B is fully 0.

I suppose it's the same or similar to the concept of normalizing weights. I just don't understand how to do it exactly the way I need to. I can make multiple points normalize, in a way, where they all proportion themselves to a total value of 1. But I can't figure out how to distribute the weight in such a way that 1 point can sort of "cancel out" all other points, so to speak.
>>
>>993400
https://x.com/ymt3d/status/1745316148378554531
>>
>>993075
excellent sugar work
>>
>>993453
you're describing a lerp function

a = b + (c - b) * t

where a, b and c are the point positions and t is normalised weight
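
For what it's worth, here's the same thing as a little NumPy sketch (just the math, not a node setup): lerp rebuilds A from t, and the inverse lerp recovers t from A, which gives the B/C weights that anon was asking about.

import numpy as np

b = np.array([0.0, 0.0, 0.0])   # point B
c = np.array([4.0, 0.0, 0.0])   # point C
a = np.array([1.0, 0.0, 0.0])   # point A, somewhere on the B-C segment

# lerp: rebuild A from a normalised weight t in [0, 1]
t = 0.25
a_from_t = b + (c - b) * t

# inverse lerp: recover t from A by projecting onto the B->C segment
bc = c - b
t_from_a = np.clip(np.dot(a - b, bc) / np.dot(bc, bc), 0.0, 1.0)

weight_c = t_from_a        # 1 when A sits on C
weight_b = 1.0 - t_from_a  # 1 when A sits on B
print(weight_b, weight_c)  # 0.75 0.25 for the numbers above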
>>
>>993469
So c-b gets the distance right? And then multiplying it by t scales it to the weight. And then adding b back places the points back where they're supposed to be. Do I understand that correctly? I'm a little lost. How does this assign values onto B and C?
>>
>>993473
i'm not a mathman, but a vector subtraction creates a new vector.
so c - b creates a vector pointing from b to c.
the length of (c-b) is the distance between the two positions.
i've never really thought about the way the lerp function works tbh, houdini just has a built-in one.
i think freya holmer has a video about it though.
>>
>>993473
>How does this assign values onto B and C?
i'm not sure what you mean here?
you can pull the query positions right / know them already?

if you want to solve for a different variable, just rejigger the equation.
also, those chatbots are good for basic stuff like this.
>>
File: cage test 01.webm (187 KB, 436x338)
>>993477
>>993478
Sorry if I'm explaining badly. Maybe a visual will help demonstrate what I'm attempting. I'm trying to get 1 point to sort of average itself between multiple points, in such a way that it can't escape its "cage", so to speak.
So here, I took a regular cube, and erased the faces so the inside is visible. Then I made a separate object that's only a single vert and placed it inside of the cube-cage.
Using a bone, I can move the vert around.

Then, in geometry nodes, I averaged the position of the cube's points. And randomly rearranged the cube's points a bit, keeping the irregularly shaped cube convex. (I'm pretty sure making a concave shape would add complications I don't want to bother with)

Now, ideally, when moving the bone, the point inside of the cube should attempt to follow the bone with near perfect accuracy. Only stopping when it reaches the boundary of the cube. I don't expect the boundary to be exactly the same as the cube's faces. I'm guessing that averaging between multiple points would create a sort of convex shape that's pinned down by the vertices of the cube. I don't know, that's just a guess. But what I'm fairly certain on, is that if the weights are averaged properly, then if I attempt to move the caged vert into the corners of the cage, then it should sit entirely flush with the vert that defines the corner.

Does that make sense? As you can see in the webm, I'm part way there. The problem is that the points are all contributing to the average too much.
>>
>>993483
i see what you're trying to do but thinking about it makes my head hurt
i'd just check if i was inside the container geo (in houdini i would use a volume representation for this, in blender the solution seems to be raycasting) and if not, just find the nearest position on its surface and just stay there (blender seems to have a sample nearest node).

not the fastest solution, but it'll always work (+ i don't do vidya so i don't know how they'd handle it)
>>
>>993490
Yeah, I know I could do raytracing. But the raytrace node comes with its own issues. If I can get 1 point weighted properly, then I can do it with more points, and then the magic would really happen.

The crazy thing is, I think I figured out the weighting method I need months ago. But I never applied it. It only existed in my head. Now I'm trying to rediscover it.
>>
>>993483
Maybe find the nearest face, project onto that plane (easy with a dot product), and if the point is behind the plane (dot product also) then don't modify it. If the point is in front of the plane, snap the bone to the projected position.
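
Rough sketch of that in plain math (NumPy, not the node tree; the positions are made up): project onto the face's plane and use the sign of the dot product to decide whether to touch the point at all.

import numpy as np

p = np.array([1.0, 0.5, 2.0])             # bone / query position
face_point = np.array([0.0, 0.0, 1.0])    # any point on the nearest face
face_normal = np.array([0.0, 0.0, 1.0])   # unit normal of that face, pointing outward

# signed distance to the plane: > 0 means in front (outside), <= 0 means behind (inside)
signed_dist = np.dot(p - face_point, face_normal)

if signed_dist > 0.0:
    # snap to the projected position on the plane
    p = p - signed_dist * face_normal

print(p)  # [1.0, 0.5, 1.0] here, since the point started in front of the plane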
>>
>>993483
nvm, I tried this >>993528 and it didn't work.
However, raycast does work perfectly.
You can simply constrain the vector from the box origin to the bone to a maximum length of ray_intersection_distance.
I tested it and it appears to work perfectly, both inside and outside the box
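
The clamp itself is just a vector length cap; something like this (a sketch, assuming you already have the hit distance from the Raycast node):

import numpy as np

origin = np.array([0.0, 0.0, 0.0])   # box origin
bone = np.array([3.0, 0.0, 0.0])     # bone / target position
ray_hit_distance = 1.5               # origin-to-surface distance along this direction (from the raycast)

v = bone - origin
length = np.linalg.norm(v)

# constrain the vector to a maximum length of the ray intersection distance
if length > ray_hit_distance:
    v = v / length * ray_hit_distance

constrained = origin + v
print(constrained)  # [1.5, 0.0, 0.0]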
>>
File: cage test 02.webm (468 KB, 1562x802)
>>993540
No, you were onto something with the nearest idea. I got it to work.
This technically gets the results I asked for. But not quite in the way I need it to work. The weighting is important. I need the cage itself to have a sort of "field" to it. Where stray points inside of the cage are affected by the weight of the cage.
>>
>>993552
Interesting. When I tried Sample Nearest Face, it would sometimes pick the wrong face (like the top face instead of the side, since the edge is technically equally close). But I didn't use capture attribute, because idk how that works.
Maybe it does the same thing for you, but you fix it with the Geometry Proximity node.

>Where stray points inside of the cage are affected by the weight of the cage.
How should they be affected? Are there any points inside for which you know what projected values they should have?
it's easy to push them inward or outward, or use float curves to adjust the feel.
For example, define the 0-1 range where 0 is the center, and 1 is the plane intersection distance. Apply float curve of some kind, preserving the 0-1 range.
Then map it back to a position
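
Written out as plain math, the remap is just: normalise the radius, run it through a curve, rebuild the position. A sketch with a simple power function standing in for the Float Curve node (numbers made up):

import numpy as np

center = np.array([0.0, 0.0, 0.0])
p = np.array([0.6, 0.0, 0.0])    # the stray point (assumed not exactly at the center)
boundary_dist = 1.0              # center-to-cage distance along this direction

d = p - center
r = np.clip(np.linalg.norm(d) / boundary_dist, 0.0, 1.0)   # 0 at the center, 1 at the cage surface

# stand-in for the Float Curve: r**0.5 pulls points toward the perimeter,
# r**2.0 would pull them toward the center instead
r_curved = r ** 0.5

p_new = center + d / np.linalg.norm(d) * r_curved * boundary_dist
print(p_new)  # roughly [0.775, 0, 0]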
>>
>>993555
update, I tried the geometry proximity node and it actually gives the closest point on the geometry to the sample point (as you were already aware.)
That's very convenient! Good to know.
>>
File: float_curves.png (25 KB, 554x345)
>>993552
An example of the curve idea.
The curve on the left makes the points want to tend toward the perimeter of the shape, while the right curve makes them pull toward the center.
You might want to even increase the range of the curve, so that the bone has to be further outside the volume for the generated point to reach the perimeter
>>
>>993555
I've been doing nodes for like a year now, and capturing attributes still doesn't really make sense to me. Actually, within the last week or so, I think the idea is only now starting to sink in. By capturing, you're basically telling blender to save your results up to that point. If you have two geometry lines, and only one of them has the capture on it, then only that 1 line will have the results saved. The other geometry line is blind to the results. So in order to make the other line see, you have to sample the results off the first line, and then save (capture) the results on the second line.

I needed the vert to see some of the information from the cube. So I performed a couple transfers from the cube geometry line to the vert geometry line. But the cube consists of 6 faces, and the vert is only a single point. So how do you save 6 bits of information onto 1? Well you can't. The vert can only accept 1 face attribute at a time. But that's ok. We only need the vert to know the attribute of the face it's nearest to. So we sample the nearest face, and then save (capture) its attribute.
>>
>>993483
Just do add points, pipe in location from object info on ur armature and then limit it with a mesh boolean.

Also you can just go to visibility in an object's properties and make it a wireframe, no need to delete faces.
>>
File: 1703591832814890.png (152 KB, 616x442)
i made dis
>>
>>993670
Looks nice. What are you making, something sci-fi?

>>993555
>How should they be affected? Are there any points inside for which you know what projected values they should have?
I don't know. I keep bouncing back and forth. I'm not sure exactly what I need. But right now, I'm thinking that the center of the cage should be "dense". Impassable. So calculate the center and give it a value of 0. And then the boundary of the cage is lighter, possibly more flexible. So it should have a value of zero. But it should also be impassable. The inner verts are unable to escape the cage, yet they can move more freely along the edges, and more slowly as they compress toward the center.

I'm not sure float curve will help here. Or at least, I can't imagine how. I think I have a different idea, but it will require some stupid math and the accumulate field node, which I'm not even sure is possible. And I'll have to create a copy of the verts, unparented from the bone. Because I need the position pre and post movement. Which is impossible as long as the verts are parented directly to the bone.

I wish Geometry nodes had at least a *basic* armature node. They don't have to give us a full suite of armature nodes. But even something as simple as knowing the positions of the bones in pose and in rest would unlock many possibilities. As it is now, you have to place empties or verts at bone locations to get that information, and it's just a pain in the ass.

>>993645
Took like 20 re-reads to understand what you meant. Even if that solution works, it won't suit my needs, as it will delete points that escape the mesh. I don't want that.
>>
>>993711
why are you spending more time on this if ray intersection is working for you?
what's the use case that ^ isn't the right solution?
>>
File: mass squish.png (14 KB, 874x710)
>>993715
I'm sorry, it's all very experimental. Which is why I'm holding back on explaining too much. Because I don't want to bog you all down with details to ideas that I'm not even sure will work.

I want to create something like mass. Even if it's kind of faked. I believe that in order to do it, points need to understand the mass they are a part of first. Creating something of a "field" capable of repelling stray invading points. But not in a strict binary way where you're either in or out. Rather in a soft density kind of way, where the mass can compress and give way to invading points.

Raycasting might help do the part where points detect their place inside of the cage. But I haven't actually tried it yet. Because I need every point to check every possible plane it can "see" while inside the cage. Which means casting 1 ray isn't quite enough.(or maybe it is, idk) I think I'll need to dynamically adjust how many rays are cast from a single point on the fly, with no upper limit. If a point can see 10,000 planes, then it needs to cast 10,000 rays to know the points of all the planes. That's not feasible with the raycast node, I'm pretty sure.

It's doable with a plane projection calculation. And then calculate if the projection is inside or outside of the planes. And then for all the points inside of the planes, get the attribute. However, that's a problem, because a projection calculation like that requires creating a shit ton of points, which slows everything down. There has to be a faster way.
>>
>>993717
Look into signed distance fields. MagicaCSG is a good example of what they can do.
>>
>>993717
>>993734
yeah i was going to suggest volume/sdf look ups again. the lookup is cheap and gets you distance from the surface for free (you can sample the volume's gradient for direction to the surface). the conversion to a volume representation will probably be the most expensive bit, but it can be pretty fast depending on voxel resolution.
FLIP and MPM sims use a volume+point method to do their thing.
blender support for that stuff i don't know about though.
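
For a feel of what an SDF lookup buys you, here's the simplest possible version (a sphere SDF in NumPy; a real setup would sample a voxelised SDF of the cage instead of an analytic sphere):

import numpy as np

center = np.array([0.0, 0.0, 0.0])
radius = 1.0
p = np.array([1.3, 0.0, 0.0])   # query point

# signed distance: negative inside, zero on the surface, positive outside
def sdf_sphere(q):
    return np.linalg.norm(q - center) - radius

d = sdf_sphere(p)

# the gradient of the SDF points away from the surface; for a sphere it's just radial
grad = (p - center) / np.linalg.norm(p - center)

if d > 0.0:
    p = p - d * grad   # push the point back onto the surface

print(p)  # [1.0, 0.0, 0.0]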
>>
>>993734
Signed distance fields hurt my brain. I don't understand the math. Do you have any resources for noobs to learn? I really need this shit dumbed down as much as possible.
>>
>>993434
Wait a minute... is this a merchant?
>>
>>993899
oy vey, I see it too... meshuga
>>
>>993434
>Houdini
Is easier and better than Blender.
>>
File: 1236667.png (716 KB, 687x696)
>>993903
>easier
It's easier in the sense that it has a built-in interface for points and primitives that actually shows you the index numbers you are working with, and the fact that you can do the same with attributes and the like. Also the fact that there is a built-in SOP for most things and multiple ways to make your own outside of connecting node spaghetti, like supplementing with VEX wrangles.
>better
It's better for proceduralism and simulation because every facet of geometry you are working with can be manipulated in ways that are next to impossible in a normal build of Blender. I can technically manipulate the color attribute using geometry nodes and might be able to use that to do something simple like displace some geometry, but I can't do stuff like converting a float value into a viscosity attribute to drive a FLIP sim. Blender doesn't even have a stable DOPs equivalent to Houdini's.
>>
Trying a different approach to get deformation. Well, it's an old approach I attempted before, but I'm a little more experienced now, so I can do more than before.
I did a little test for the shoulder region. Set up a collection of empties. Copied the collection, and parented the copies to bones. Brought the two collections into nodes. One collection is treated like a rest position. The other collection is treated like a pose position. The empties can provide location and rotation information.

I set it all up so that the mesh near the shoulder empty deforms the most. Gradually deforming less as the mesh gets near the empties that neighbor the shoulder empty. Essentially creating a perfectly smooth and normalized zone of influence around the shoulder. It's perfect to a fault, as with so few empties the zone of influence is able to reach farther than desired. But that's as expected. With more empties placed in key zones, they would all support each other, and avoid unwanted influences.

Then I added up the rotation for all the empties, and rotated the mesh only inside the isolated zone of the shoulder. The effect is jank right now, because it's still very underdeveloped. But it's looking promising so far. The idea is to make every major deformation zone be normalized to their neighbors. And with clean efficient nodes. It took a lot to set up only the shoulder with 3 neighboring empties, so I have to develop a method that automatically sets up all the masking and rotations for every zone. I have ideas for how to do that, but I'm tired and calling it enough for 1 day.

Moving the shoulder empties around, changes where the zone of influence is. Which makes for a funny warping effect.
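
In case it helps anyone follow along, the "normalized zone of influence" part boils down to something like this (a rough sketch of the weighting only, not the actual node group; the empty positions and falloff radius are made up):

import numpy as np

empties_rest = np.array([        # rest positions of the empties (shoulder + two neighbours)
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
])
falloff = 1.5                    # made-up influence radius

def weights(vert):
    # raw falloff per empty, then normalised so the weights always sum to 1
    d = np.linalg.norm(empties_rest - vert, axis=1)
    w = np.maximum(1.0 - d / falloff, 0.0)
    total = w.sum()
    return w / total if total > 0 else w

v = np.array([0.2, 0.1, 0.0])
print(weights(v))   # the nearest empty gets the biggest share, neighbours taper off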
>>
>>993075
ah that's a fine-a pasta mamma mia
>>
Are there any good geonode alternatives to skin modifier? I'd like to avoid having intermediate loop cuts, to have configurable number of sides (skin always generates kinda cuboid stuff, what if I want 3 instead of 4, or maybe I want 8), also would be nice to have configurable topology at join points, maybe configurable bevel amount or something.
>>
File: SDF 01.webm (1.64 MB, 1458x516)
>>995092
>>993738
>>993734
I didn't look up how to do SDF, but I think I got something like it going by accident. I was just looking for a way to fix my shitty rotations, when I somehow managed to make a mask that was oddly smooth. Normally when you create values that spread across the mesh, you can see one point blending polygonally into another. But apparently, you can create smooth transitions across the mesh.
I followed that idea, and then I realized I essentially created a space that the mesh is reacting to. And I can overlap them. Of course the first thing I did was isolate a titty.
>>
File: Black hole titty.webm (195 KB, 600x550)
>>995356
I tried using the sdf in a simulation node. But I still don't know how to make anything worthwhile in sim nodes, so she ends up getting sucked into her tit like a black hole.
Actually, I got better results with other configurations, but nothing so great to show off.
>>
>>995321
Also interested.
Those unnecessary loops drive me up the fuckin wall. Like yeah, I can throw a decimate on there set to planar, but it really should be simple geo from the start. Skin is great for shit like beams, rafters, and other industrial bits, but it really screws the pooch in terms of polycount and skyrockets way too fuckin quick.
>>
>>995092
You should keep that saved in another file for later. Suppose you later design some space monster that breaks reality, and the deformation seen here reflects that. Could be a pretty cool effect.
>>
File: Boob Inflate.webm (1.34 MB, 720x540)
>>995357
I accidentally created inflation. I've been trying to figure out how to make the mesh conform to the "mass" of the field. I did kind of get it. I can make it so that when gravity is applied, the mesh sort of drapes over an invisible sphere. But the parts of the mesh that don't drape over the sphere continue to fall, and it looks like a thick dripping fluid. Like melting cheese.

I need a way to stop edges from losing shape. Unfortunately, edge correction is hard. I figured I could cheat by just scaling out from the center of the field. And it created this inflation effect. The mesh is pushing out to the boundary of the field. Which isn't perfectly spherical, because I scaled it to an egg shape to better align with the shape of the breast. The scaler is even nullifying the effect of gravity.

Needless to say, this isn't what I was going for. But I think it's neat to be able to visualize the boundary of the field.

>>995507
There are probably simpler ways to do such abstractions. That node group is very cluttered, because I'm trying to find ways to control the rotations in a very precise way. But if all I wanted was to make stuff bend all abstractly, then I'd make something a lot simpler.
>>
>>993075
I've made a basic soft boolean, idk why the intersecting edges are not exposed in the boolean modifier and you have to do all this shit.
https://gofile.io/d/vbadU9
>>
File: 324582.png (364 KB, 1801x911)
>>995321
>>995433
I got you covered. If you need more resolution just do a resample before.
>>
>>996058
Sheeeeit, so simple I'm honestly amazed I never bothered.
Like I've done shit way more complicated than that. Granted, I haven't actually run up against the issue I mentioned with the skin modifier since I actually learned GN.

How well does it work with multiple, connected/unconnected edges? We ain't just usin the skin modifier to make single curves now. I use it for scaffolding and railings and stuff.
>>
>>996055
im not downloading that, just send a screenshot
>>
>>996102
I can't upload imgs here because my new isp is blocked, it is dynamic or something, anyhow it only works with simple geometry that doesn't fuck with the bevel modifier.
https://imgur.com/a/1gLFT0D
>>
>>996108
upload the image to https://catbox.moe/
>>
>>996109
Sure, whatever.
https://files.catbox.moe/o1al2n.png
https://files.catbox.moe/2vjzmw.png
>>
>>996102
I forgot to add that you have to create a vertex group with the same name that you put in the node tree to use it in the bevel modifier.
>>
File: Ark Shadow Test 01.webm (682 KB, 720x720)
Trying to create a cast shadow using nodes. No lights. Instead, placing an empty in the scene, and then the geometry reacts to the empty in such a way that the empty is essentially a spot light.

The first part is easy. Just subtract the location of the empty from the position of the mesh. Then, get the normals of the mesh. Then combine them into a dot product. That will "light up" the parts of the mesh that are facing the empty. Very easy light and shadow. The problem is that there are no cast shadows. That takes more effort to pull off.

My poor solution was to cast rays from the empty, aiming toward the mesh. The rays that reach their points uninterrupted are assigned 1. The rays that return a shorter distance than their points are labelled zero. 1 points are light, 0 points are dark. It creates a cast shadow. Combine that with the light made from the dot product, and you essentially have a working light.

The problem is that casting rays from only the points creates the jaggedness you see in the webm. I don't know how to make it smooth. I tried a number of things, but nothing works. The most likely to work is probably a technique called "shadow volume". It's basically where you extrude the mesh along the angle of the light, and the extruded geometry that intersects with the mesh will outline where the cast shadow zones are. I can do the extrusion. But I don't know what to do after that. Like, how do I imprint the silhouette of the extruded mesh onto the regular mesh? It must have something to do with textures and UVs and shaders. But what? I'm at a loss.
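
For reference, the per-point version of that light is tiny; a NumPy sketch using a sphere as the occluder, since a full mesh raycast would be long (the scene and numbers are made up):

import numpy as np

light = np.array([0.0, 0.0, 5.0])   # the empty acting as a light
p = np.array([0.0, 0.0, 0.0])       # a point on the mesh
n = np.array([0.0, 0.0, 1.0])       # its normal

# occluder: a sphere floating between the point and the light
occ_center = np.array([0.0, 0.0, 2.5])
occ_radius = 0.5

# diffuse term: dot product of the normal with the direction to the light
to_light = light - p
dist_to_light = np.linalg.norm(to_light)
l = to_light / dist_to_light
diffuse = max(np.dot(n, l), 0.0)

# shadow ray: does the segment p -> light hit the occluder first?
def hit_sphere(origin, direction, center, radius):
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-6 else None

t = hit_sphere(p, l, occ_center, occ_radius)
in_shadow = t is not None and t < dist_to_light

print(diffuse, in_shadow)   # 1.0 True: fully facing the light, but shadowed by the sphere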
>>
>>996199
You plan on doing the final renders in blender eevee? Personally, what I would do is use the existing lighting tools, pipe a Diffuse node into a Shader to RGB node, then pipe that color into a color ramp to get the shades you want... What you're asking to do would require a fair bit of fucking around, since you'd need to do what you're doing with every pixel (fragment) of your mesh that is displayed, where you're currently doing that with every face.
>>
>>996199
Pretty sure those are called shadow terminator artifacts, though I don't know if searching for them will do you any good except for people having trouble with them or Blender bragging about getting rid of them.
Why are you doing this with GN though? Shaders might work better, and you could do something pretty similar by using a straight up Normal node and changing the vector with drivers via an empty. Not to mention you get the Shader to RGB node as well.
>>
File: Ark Shadow Test 02.webm (704 KB, 720x720)
>>996203
Yes, Eevee. I don't want to use the existing lighting tools, because the shadows kind of suck. I'm sure you've noticed that. Cast shadows and contact shadows are balls in blender. Plus, you can't get toon shading and colored light at the same time. Because you can't tell the diffuse node to only work with a single light. The diffuse node works with all lights, all the time. Makes mixing colored lights impossible for toon shading. Pic related is impossible with normal lights. Doable by making your own lights.

>need to do what you're doing with every pixel (fragment) of your mesh
That's the idea. I'm reading around, and it seems I need to figure out how to cast rays per pixel. Which I suspect may be possible to do in shader nodes. Some of the nodes there work per pixel... I think. Some nodes are described as affecting the "shading point". Which is a term that isn't explained in the manual, but I *think* it's describing individual pixels.
The Camera Data node has a Z Depth function that is explicitly described as per pixel.

The problem is that I don't know how to manipulate per pixel.

>>996226
I haven't learned drivers. I know they can do a lot of functions. But I don't really understand how they work. They feel like magic. You type random things into boxes, and then *boingo* suddenly the data is different? What? Makes no sense. I'd be willing to learn if I saw that using a driver was capable of doing something I couldn't do with nodes + if I desperately wanted to do it. Like if drivers could somehow produce per-pixel rays that shoot at the empty, that would be nice. But for just working the normals? I can set normals up with nodes very easily.

I actually do have the dot product part set up in both geo and shader nodes. But geo nodes has the raytrace node, which allows me to shoot rays. So I need that anyway, unless there is a way to manipulate rays in shader nodes in a similar way.
Geo nodes also makes the extrusion part of the process possible.
>>
>>996235
I'm not sure there's a way to have enough control to do per-pixel raycasts from shaders... You're welcome to try and report back, though.
What I would do:
>Cast shadows and contact shadows are balls in blender.
Increase shadow quality settings and only use lights which produce sharp cast shadows to improve this. It'll never be perfect unless you use cycles, but it can be made workable.
>Plus, you can't get toon shading and colored light at the same time. Because you can't tell the diffuse node to only work with a single light. The diffuse node works with all lights, all the time.
You can, by using render passes and compositing. It's a bit unintuitive to initially setup in blender, but once you put it together it's a valid workflow.
>>
>>996239
Compositing eh? Perhaps. I guess I could try that. But that doesn't directly help in what I'm attempting to create. It's a "good enough" solution. Not a real solution. If I can do what I want using either geometry nodes or shader nodes, then I can manipulate lights in real time, and not have to render every time I make a minor change, just to see what that change looks like.
And if I could somehow manipulate the per pixel rays to see from the location of the empties, then I could draw perfect shadows. And then the lighting system will actually work, and not half work.

I have the general idea in my mind now. Because I found a video of a guy who created his own render engine inside of geometry node. He's an absolute mad man. Here's a timestamp to when he starts to set up the cast shadows: https://youtu.be/FqQYNdQLUDA?t=2751
But you should at least skim through part 1 and 2 in order to understand what he's doing.
I'll just tell you: He's created a grid with the dimensions of your typical computer screen. 1920x1080. Then he fires rays from a focal point just behind the grid, toward the objects of the scene. He gets the hit, and transfers the data to the corresponding block in the grid. The blocks are effectively pixels, and the grid is effectively a screen. Doing a bunch of math, he can capture shape and depth and it renders the scene onto the grid.

For cast shadows, he just takes the first ray hit, and then does another raycast toward the light. Any ray that's obstructed by an object before reaching the light toggles that grid block off. Which makes pixel perfect shadows for his grid. It's really that easy. If only I could cast a ray from the view to the surface, and then cast a second ray from the surface to the light, then I could make pixel perfect shadows. The possibility is right there within my reach. I can feel it.
>>
>>996241
Oh, yeah this should work. Geonodes operate on geometry, so making the geometry the pixels seems like an obvious solution, in retrospect.
>>
>>996242
>Oh, yeah this should work.
How though?! I don't want to create my own renderer. I just want to do the shadow trick and move on with my life.
>>
>>996244
Well... If you want to forcefuck per-pixel computations into geonodes you gotta do pretty much exactly what you've laid out in the post right above. Can't really skip any of the steps.
>make grid the size of your render
>cast from them away from the camera
>if you hit a mesh, cast from the hit toward the light
>if you hit a mesh, shade as shadowed
>if you reach the light, shade as lit
Desu I'm not sure pixel perfect shadows are worth all that compared to conventional methods, but it would work.
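
Structurally it would look something like this (a runnable toy version in NumPy: a tiny grid instead of 1920x1080, only a ground plane is visible and a floating sphere just casts the shadow, to keep it short):

import numpy as np

W, H = 24, 12                      # tiny stand-in for the 1920x1080 grid of "pixel" blocks
cam = np.array([0.0, -6.0, 2.0])   # focal point behind the grid
light = np.array([3.0, -3.0, 6.0])
sphere_c, sphere_r = np.array([0.0, 0.0, 1.0]), 1.0   # occluder floating above the ground plane z = 0

def hit_sphere(o, d):
    oc = o - sphere_c
    b = np.dot(oc, d)
    disc = b * b - (np.dot(oc, oc) - sphere_r ** 2)
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-6 else None

image = np.full((H, W), ' ')
for y in range(H):
    for x in range(W):
        # primary ray through this grid cell, cast away from the camera
        u = (x + 0.5) / W - 0.5
        v = (y + 0.5) / H - 0.5
        d = np.array([u * 2.0, 1.0, -v])
        d /= np.linalg.norm(d)
        if d[2] >= 0.0:
            continue                    # ray never reaches the ground plane (sky)
        t = -cam[2] / d[2]              # distance to the ground plane z = 0
        p = cam + t * d
        # second ray: from the hit point toward the light; blocked by the sphere -> shadowed
        to_l = light - p
        dist = np.linalg.norm(to_l)
        l = to_l / dist
        s = hit_sphere(p + 1e-4 * l, l)
        image[y, x] = '#' if (s is not None and s < dist) else '.'

print('\n'.join(''.join(row) for row in image))   # '#' = in shadow, '.' = lit, ' ' = sky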
>>
>>996246
This is bullshit, man. Even if I did all that with the grid, how would I make the results mesh with the rest of the scene? I would have to put the grid directly in front of the camera. The grid will get colored, and block the camera. I suppose I would have to delete all of the pixels that aren't shaded, in order to prevent them from obstructing the view. I dunno.... doesn't sound quite right. And not as convenient as I was hoping for.
>>
>>996248
well, yeah. No one said near-realtime pixel perfect shadows were gonna be easy, there's a reason we've come up with all this bullshit to do them kek.
I think the idea would be that the grid IS the view, so that it's not obstructing anything, it's actually what you want to see.
If you don't wanna fuck with that, you would probably have a nicer time trying to do it with custom rendering in Unity or Godot.
If you wanna stick to blender... Maybe you could try to write your own renderer and hijack the EEVEE raycasting in 4.2 for your own purposes. No idea how to go about that, but it's possible.
>>
>>996249
It could be easy. If only there was a way to bounce the pixel rays. Just one bounce. That's all I need. That's not asking for a lot, right?

Did they patch 4.2 to not have shit performance?
>>
File: Empty Based Light.webm (994 KB, 1200x1200)
>>996235
>But I don't really understand how they work. They feel like magic. You type random things into boxes, and then *boingo* suddenly the data is different?
I mean yeah you can do it like that, but I think most people now just right click values, and copy/paste them as drivers.
At its most basic level, you're just linking one thing to another. Think of it like parenting, but for values.
Like say you want to link the z rotation of an object (or any object) to the X location of another, so when you rotate the first object the other object moves by the same amount. You just right click the Z rotation of the object, copy it as a driver, and paste it into the X location of the other. Ez pz driver. If you want to get fancy then you can introduce math, or just go to the driver curves and adjust the values there to get things how you want it.

As for your light based empty, have you tried something like this? The empty just uses a dot product with the normals to color different parts of the mesh. Here I'm using black and white, but you can do whatever with it.
>>
>>996251
>Did they patch 4.2 to not have shit performance?
It seems like there's a bug giving some people shit performance, but it works pretty good on my machine (tm).
>>
>>996257
>>996235
I forgot to mention that the rotation and scale of the empty can change the light as well, not just the location.
>>
>>996257
NTA, but that's the easy part, the issue is he wants cast shadows too.
>>
>>996260
Ahhh, I've fundamentally misinterpreted the issue then.
I think I've done something like that in the past though, so lemme see if I can dig it up.
>>
>>996257
Texture coordinates?! I wasn't expecting that. But yeah, like the other anon said, that's the easy part. Cast shadows are what I need. Though your method of doing the dot product uses 1 less node. So I'll probably copy it from now on, in order to save space.

>>996258
My machine is a normal desktop PC with windows 11, AMD Ryzen 7 5700, RTX3070, and 16GB of some kind of ram, I'm not sure. So basically, it's a half decent PC using popular parts from regular brands. The fact that 4.2's performance goes to shit on my machine, I can hardly think of as a "bug". More like they didn't even fucking test the thing on any hardware but their own. then they released it with their fingers crossed, praying it wouldn't fuck up, which it did.
>>
File: Ark Shadow Test 03.webm (591 KB, 720x720)
The extrusion method, or the "shadow volume" method, looks the most promising. You can see how the extrusion creates cast shadows on the body in appropriate places. If I could somehow grab that as a mask, I might have something. Or perhaps get the difference between the depth of the extrusion and the depth of normal mesh, and then the less than 1 distances will create the mask. That might not work either. Because the ray has to count 1 for the front of the extrusion, and then 1 again for the back of the extrusion. In order to determine if it's inside or outside.
>>
>>996076
>How well does it work with multiple, connected/unconnected edges?
It works fine unconnected but starts to break when connected.
>>
>>996292
Then yeah, not quite a 1:1 replacement yet. Should be interesting to see how the OG modifier works when the devs eventually convert all the modifiers to GN. Something I'm both curious and extremely scared to see. They're gonna fuck up a lot of shit with half-assed implementations. Just like bloom.
>>
Did a quick experiment to learn how geometry nodes and shader nodes handle proximity differently. The position of the mesh is getting compared to the position of an arbitrary point in space. And then a distance threshold is established. In this case it's the value of 1.7. I chose that number arbitrarily too. Dividing the distance by the threshold will make every point inside of the threshold a value less than 1, and everything outside of the threshold a value greater than 1.

As you may already know, blender automatically blends the values from point to point. So if 1 point has a value of 1, and another point has a value of 0, then it creates a gradient between the two points. Normally, these gradients look squarish, as they follow the edges of the planes. However, you can manipulate this by pushing 0 down further into the negatives. And 1 up farther beyond 1. When you do that, the gradients along the edges shift as either black or white dominate the other. With precise control, you can make the gradients appear circular.

However, this does require dense enough geometry to create the angles of the circle. If you try it on a 1x1 plane, it will still appear blocky. But with a 4x4 plane, it begins to fool the eye. If you scale the mesh to the threshold, you can get more accurate interpolations. I did it before, but I forgot how already. The interpolating I mean. I remember how to scale. But not what comes after that.
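
Written out as plain math, the same remap looks like this (a sketch with made-up numbers): dividing by the threshold gives the under-1 / over-1 split, and a map-range with a pushed-out lower bound is what rounds the interpolated gradient off.

import numpy as np

target = np.array([0.0, 0.0, 0.0])   # the arbitrary point in space
threshold = 1.7

# a handful of vertex positions on the plane
verts = np.array([
    [0.5, 0.0, 0.0],
    [1.7, 0.0, 0.0],
    [3.0, 0.0, 0.0],
])

d = np.linalg.norm(verts - target, axis=1) / threshold   # < 1 inside the threshold, > 1 outside

# map-range style remap: pushing the "from" range below 0 and above 1
# shifts where black or white dominates once the values get interpolated across the faces
lo, hi = -0.5, 1.5
mask = np.clip((d - lo) / (hi - lo), 0.0, 1.0)

print(d)      # about [0.29 1.00 1.76]
print(mask)   # about [0.40 0.75 1.00]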
>>
File: Shader interpolation.webm (672 KB, 784x724)
>>996489
However, shader nodes work differently. It's not interpolating between points of the mesh. It's doing something that I don't quite understand. But it's a lot more accurate. Making a perfect circle. It doesn't matter how much or how little geometry you have on the plane, even with 1x1 it still makes the perfect circle.
>>
>>996489
>>996490
that's basically the shader working on a per-pixel basis and the geonode working with verts (AKA geometry).
>>
File: 001.jpg (1.35 MB, 2904x1652)
Yo yo yo dudes, I'm trying to make some power lines, but I'm having trouble. I got the poles to gen, and tilt, and I even have wires going between them that match the tilt. Problem is, the wires don't follow a smooth curve and just come to sharp points.
I'm not really sure how to fix it. Any help would be appreciated.

If someone wants to take a look at it, here's a catbox link. The node graph is a bit too large and wild to use a screenshot, and to expect someone to re-create it from a screenshot.
https://files.catbox.moe/iuk6gk.blend
It'd be super nice to have a proper mathematical catenary curve between em, but honestly I'd take any smooth curve at this point.
Thanks in advance,
>>
File: power poles.png (233 KB, 938x491)
>>996497
I made an attempt. My math is shit, so it's probably incorrect. I didn't even bother learning what a catenary curve is. I wanted to fix how the lines are aligned. But that seemed like a daunting task. So after fiddling with some nodes, I decided to let them be and focus on the drooping issue.
https://files.catbox.moe/9kc7n5.blend

Adjust the node labelled "odd" to make the lines more or less smooth.(Keeping the integer odd, cuts the cable in half.)
Adjust the node labelled "droop" to make the lines more taut or slack.
I deleted the map range and random node. Because I wasn't sure what they did, and they were in my way. No special reason.
You could probably improve on this. I did shoddy work. But it's curves now, so eh.
>>
>>996515
Thanks bro. Works pretty well enough for my needs.
The alignment issue was an easy fix, just nudge the curves forward a bit, since the beam they align to is a bit forward of the origin point of the pole. Just had to add a few nudges to the Y value of the first Set Position nodes in the "Wire Create" section for the curves.
As for the random value and map range, they were my attempt at adding a bit of random tension to the wires, so they'd droop at a common variable, but some might be more/less droopy than others. Didn't quite work as intended and just added noise to em. I thought maybe I could add that randomness to the droop factor in yours but it seems like it did the same thing.
Honestly it's a really small thing that doesn't matter, so it's all good in the hood.

Could you explain a bit about why what you did worked? It doesn't look too different to what I was trying to do, though it seems like you turn yours into a mesh before drooping it. Which I guess that's the important bit.
No worries if you don't feel like it. Still, that was a big help, keep an eye out in the wip thread for it.
>>
>>996560
Well first of all, you didn't subdivide in the file you uploaded. You had the subdivide node set to 1 cut. Which is not enough cuts to work with. I upped the subdivision. That made it look something like a droop, but it looked like a bad droop. So I had to redo the math to make it look smooth.

In order to do the curve calculation, I had to know the distance between each two anchor points along the wire. I didn't know how to easily do that, so I had to use the shortest path node to get the same kind of information. I didn't know how/if the shortest path node works with curves, so I had to convert the curve to a mesh in order to use it. So converting to a mesh was merely a necessity for use of the shortest path node. Not mandatory if you can think of another way to get distances. Like say for example, creating points where the wires should be anchored, and then instancing wires onto the points. So you have individual wires between each pole, rather than one long wire. That way, you could sample the length of each individual wire.

But anyway, what the shortest path node does is it attempts to reach a destination in the shortest possible moves along an edge. And it will count up how many moves it made to get there, by what they call "weights". Since you already defined what the anchor points are by using their proximity to the pole, I just used that to define where the end points were, and then the shortest path node drew a weighted path towards each end. The points farthest away from the anchors will end up with the highest weights. That just so happens to be the points in the middle of two anchors. And then the points gradually decline in weight going from the middle to the anchor. That's why it's important to make the subdivision odd. A cut at exactly the halfway mark will make sure the weights from middle to end are perfectly even.

Finally, using the gradual weights, you can do the wacky angular math to make a curve.
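
If anyone wants the droop as straight math instead of weights: with a parameter t going 0 to 1 along each span, a parabola that is zero at both anchors is a decent stand-in for a true catenary over short spans. A sketch (anchor positions and sag are arbitrary):

import numpy as np

p0 = np.array([0.0, 0.0, 10.0])   # anchor on pole 1
p1 = np.array([8.0, 0.0, 10.0])   # anchor on pole 2
sag = 1.2                         # how far the wire droops at the midpoint
n = 17                            # points along the span (odd, so one lands exactly in the middle)

t = np.linspace(0.0, 1.0, n)
points = p0 + np.outer(t, p1 - p0)

# parabolic droop: 0 at both anchors, -sag at the midpoint.
# over short spans this closely approximates the real catenary (cosh) curve.
points[:, 2] -= sag * 4.0 * t * (1.0 - t)

print(points[[0, n // 2, -1], 2])   # [10.  8.8 10.]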
>>
>>996565
Ahhh, I think I get what you're saying. At least, the way you've explained it makes sense to me. I doubt that I could have figured that out on my own though. Up until now I've mostly been fucking around with general stuff like set position and the like, shortest path wasn't even on my radar, let alone knowing where it would even apply.
As for the subdivision, I had it turned up, but I turned it down since it didn't seem like it was doing any good, but I didn't quite want to get rid of it just in case I could figure something out with it.
Thanks again broski.
>>
>>996566
Yeah, the noding experience is like that. Where you didn't know a function exists, until you need it, and then after some trial and error, you find it was there all along. I still don't know what half of the curve nodes do.

Shortest path is a powerful node once you learn how it works. It's pretty much made for manipulating many mesh lines at once. Most tutorials will teach you how to make fractals with it. But I've no interest in fractals. You might benefit from learning fractals though, since you're doing an outdoor scene. Might be able to create nicely randomized trees by manipulating the values of fractals randomly.

Maybe this tutorial will help.(I don't know)
https://www.youtube.com/watch?app=desktop&v=nXCSMO4iioA
>>
File: Animate along curve.jpg (128 KB, 1260x570)
>>996570
>Where you didn't know a function exists, until you need it, and then after some trial and error, you find it was there all along.
Sounds like me and the map range node. For years I had a node group I made called "color value" that took a mask factor and mixed it between two numerical values. I used it all the time. Then I find out later on accident that the Map range node both exists, and does the same thing but better (the different interpolation methods were a fuckin godsend).
Needless to say, map range is probably among my favorite nodes by now.

Still, thanks for the vid. I'll give it a go. I mainly use an external tree program for foliage (Speed Tree), but any info is better than none, and there's things that fractals are useful for.

I feel bad for asking, as I feel like I've asked a bit too much, but is there a way to animate the poles along the length of the curve? I've given it a few attempts from solutions I've done in the past (pic related is something I've referenced before for things), but can't seem to figure out how to do it with this one. I've had "some" success in some of the attempts, and I can get the poles to move along the curve, but it either fucks up the cables or does some weird shit where it wraps the cables back to the start.
I'm trying to get them to loop.
>>
File: Ark Shadow Test 04.png (327 KB, 827x558)
>>996278
>>996257
>>996246
Did the grid method. Didn't follow the video. Just did my own jank version. I already had a node group that scales points in the fashion of a camera. So it wasn't hard to get the rays to shoot in the correct direction. Though my camera scaling node doesn't fit mathematically perfectly to the actual camera, I can still just manually type in a number that gets it as close as I can eyeball it.

Anyway the important part is that I can make the cast shadow very easily when I can shoot the rays in the correct direction. It's actually so easy once you grasp the concept. But the grid slows the animation down by a lot. This isn't feasible for real time viewport activity. And it's tied to the camera, because geometry nodes doesn't have any concept of the viewport view's location.

What I need to make this work fast is the ability to raycast in shader nodes. Not in geo nodes. Then I wouldn't have to set up this stupid slow grid.
>>
>>996782
>because geometry nodes doesn't have any concept of the viewport view's location
I think you could hack this by attaching some object to the camera that you then use as a geonode input. But yeah, this is gonna be super fucking slow.
>>
>>996784
>attaching some object to the camera that you then use as a geonode input
Why even do that?
There's an "Active Camera" node. It automatically knows which camera you're using. Just grab that, plug it into an object info node and you're good.
>>
>>996784
>>996799
I don't think you guys understand. Knowing the viewport view's location and knowing the camera's position are 2 entirely different things. I don't want to have to go to the active camera every time I need to check if the shading is correct. And have to manipulate the camera to check different angles. It's bothersome.
I want to see the effects of my work directly from any angle instantaneously in the standard viewport view.

Which would be totally possible, if one could raycast pixels in such a way that detects hits on positional geometry. That, as of now, seems impossible. Which sucks. I've been trying to think of different ways to detect "hits" in shading, but no good ideas come to mind. The texture coordinate node has a "Camera" output. Connecting a length math node to that gives you a per-pixel distance between the viewport and the geometry. That's a start. Somehow, taking the camera data and the location of the empty, I can get a per-pixel distance between the geometry and the empty. This is great. You add those together, and you have the bouncing distance from camera to geometry to light.

Now what I need, is to get the truncated distance from the camera data to the empty that results from hitting the geometry. Just that 1 thing, and cast shadows are cracked wide open. But how to get that without a raycast function? Perhaps there's a manipulation using depth? Maybe detect which pixels are on the same level of depth in relation to the empty, as viewed from the camera, and then assign the nearest pixels white, leaving anything not the nearest black. I don't know. Thinking about this makes me delirious.
>>
>>996809
Desu I'm not sure how you do this without getting your hands dirty with some of the rendering backend. Either by reimplementing your geonode setup for the viewport, or by creating an arbitrary raycast shader node, or by fucking with whatever the eevee raycasting feature on 4.2 does.
>>
>>996813
Yeah, that's what I'm thinking. I'll have to do something on the backend. Which I have absolutely zero experience with. I can't code. No python experience whatsoever. And 4.2's performance is unusable for me.(at least, the last time I checked a month or two ago)

I'm frustratingly close to something amazing, but out of options. Feels blueballing as fuck.
>>
>>993075
ive never been able to get geometry nodes to work outside of blender. even when texture baking, it never bakes how it looks in the nodes. why is this?
>>
File: Ark Eye Test 01.webm (1.22 MB, 1152x914)
I hope you guys don't mind me using this thread for shader stuff. I'll go to another thread if it's a problem. But while this is primarily shader node work, I do use geometry nodes to store a bunch of data. Two empties are set up as eyeballs. And the empties track a third empty that's controlled by the armature. The geometry node captures the positions and rotation of the empties.

I'm working on completely procedural anime eyes. I did something like this before, but the previous method required making UV maps for each eye. This time, no UV mapping is required. It's almost as if I cut a hole into the eyeballs, except with shaders. And use that hole as a mask.
I figured out how to scale the hole to be bigger or smaller. How to change the shape from circular to oval. And how to rotate it, so it tracks along with the character's pose. And then by creating masks within masks, I can color in the eye with a pupil and highlight.

Because it's all made from the ground up, I essentially had to recreate the mapping functions that the mapping node would normally handle. I'm pretty proud of this. It was pretty tough to work out how to do. But so far it appears to render faster than the previous method. Requires fewer empties than the previous method. Requires less geometry node math than the previous method. And isn't reliant on UV unwrapping to stay even. So far, so good.

I still need to figure out how to restrict the eyes, so when they're tracking to the far sides, they don't roll all the way to whites.
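
Not necessarily how your nodes do it, but for anyone curious, the "hole in the eyeball" mask boils down to an angle test against the look direction; a rough sketch of that idea (all the numbers and names are made up):

import numpy as np

eye_center = np.array([0.0, 0.0, 0.0])
iris_dir = np.array([0.0, -1.0, 0.0])   # direction the eye is looking (the tracked empty)
iris_half_angle = np.radians(25.0)      # controls the size of the "hole"

def iris_mask(surface_point):
    # angle between this shading point's outward direction and the look direction
    v = surface_point - eye_center
    v = v / np.linalg.norm(v)
    d = iris_dir / np.linalg.norm(iris_dir)
    return 1.0 if np.dot(v, d) > np.cos(iris_half_angle) else 0.0

print(iris_mask(np.array([0.0, -1.0, 0.0])))   # 1.0, dead centre of the iris
print(iris_mask(np.array([1.0, 0.0, 0.0])))    # 0.0, out on the white of the eye

Squashing v along one axis before the test would be one way to get the oval shape, and a smaller half angle inside the first mask could give the pupil.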
>>
File: roller coaster.jpg (325 KB, 2063x908)
How do I get an array of a specific number of objects to follow a curve? I'm trying to make a roller coaster train.
I want an array of like 6 objects that follow a curve along the length.
I can't find anything about it. Every fucking video/question just resamples the curve to a specific number of equidistant points to distribute things along the entire length. It's like, yeah I KNOW how to do that, it's fuckin simple. The fact that I can't seem to find what I want probably means it's not so simple... or I'm retarded. Probably both.
I tried doing pic related, but it doesn't work. The first car follows the curve and rotation, but the rest are off the curve and copy the rotation of the first one.

I'd use the regular ol array modifier+ curve modifier, but I don't want/need them to deform, and they're collections which kind of puts that out of the question.
>>
>>997207
>Every fucking video/question just resamples the curve to a specific number of equidistant points to distribute things along the entire length.
Sounds exactly like what resample curve does?
>>
File: Coaster Test.webm (979 KB, 1562x872)
>>997203
For the matter of only 1 car following the curve: it's most likely because you didn't tick the "separate children" box in the collection node. And remember to apply any transformations you've made to your curve. Toggle between "original/relative" too. You might have to flip between one or the other depending on where object origins are, and all that jazz.

Anyway, I attempted a solution. I'm a noob at this, so take it with a grain of salt. But basically, the accumulate field node allows you to sort of "stack" things by adding up a value per instance, so the cars stack in front of each other. And then I added a value to the entire stack in order to move the whole stack along the length of the curve.

The modulo node ensures that no matter how high the value of the stacked instances get, they will stop at the length of the curve and loop back to zero. Effectively creating a loop.

If your cars are different sizes, then you'll likely have to do some extra bullshit to space them correctly. My solution assumes they're all the same size.

I encountered a weird behavior where the curve was twisting when it looped. Don't know why. But I consulted with this tutorial, and it fixed the problem. https://www.youtube.com/watch?v=IQdT5kACMQc I couldn't find a solution on the exchange or the artist community forum. But that's why I used two align euler to vector nodes. Some mathematical magic eliminated the twisting artifact.

File: https://files.catbox.moe/iy7lhj.blend
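
The accumulate-plus-modulo part, written out as plain math (a sketch; the curve length and spacing are stand-ins for whatever your curve actually measures):

import numpy as np

curve_length = 40.0   # total length of the track curve
car_spacing = 2.5     # distance between car origins (what the accumulate field adds up per instance)
num_cars = 6
train_offset = 37.0   # animate this single value to drive the whole train along the track

# each car's distance along the curve, wrapped with modulo so the train loops
offsets = (train_offset + np.arange(num_cars) * car_spacing) % curve_length
factors = offsets / curve_length   # 0-1 factors of the kind you'd feed a Sample Curve node

print(offsets)   # [37.  39.5  2.   4.5  7.   9.5] - the train wraps around the loop
print(factors)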
>>
>>997220
Ahhhh. I always forget about the accumulate field node.
So yeah, this kinda works, but not quite the way I had in mind. The collection I'm instancing is a single car (multiple separate objects that make up the car), not a collection of multiple cars. Re-reading my original question, I guess I missed that bit. So that's on me.
Is there something simple I can do to instance that car collection to make up the train? That's been the biggest hurdle.
>>
>>997220
>>997283
Also, how is it that the car is instanced without actually using an instance on points node? I thought you HAD to do that to have something like that show up.
>>
File: Coaster Test 2.webm (508 KB, 1556x860)
>>997283
>>997284
You can set instances anywhere you want using the set position node. Same as you would a mesh. You don't have to use points to put them places. What I did, was sample a position along the length of the curve using the Sample Curve node. The node scans along the length of the curve, and figures out what the XYZ coordinates are. And then you just tell the instance to go to that position using the Translate Instances node.

An instance uses the object's origin point to place itself. So if you want to use the same instance multiple times, then yeah, you would create multiple points, and then put the instance on the points. So that's what I'm going to do. I'll place points onto the curve in a similar way that I put the cars on before. Changing the accumulate node from instances to points. And using the basic points node to count up the points. You said you wanted 6 cars earlier, so I set it to 6. It appears the rotation math still works, thank god.

And there. A single car instanced 6 times.
I barely changed anything, but here's the file anyway: https://files.catbox.moe/pu7x3k.blend
>>
Damn, Blender's doing some cool shit I've never seen anywhere. Those paired nodes that create a section between them and nodes placed inside are interpreted in context of this section, which designates a loop iteration in this case. Have you seen this in other visual programming languages?
https://www.youtube.com/watch?v=y1ogz8hesFw
>>
>>997928
Oh fuck. I was putting off upgrading, since 4.2 isn't compatible with my hardware, giving me crappy performance. But the for each node just might be the node I needed. I was just thinking about how I would be able to do a distance calculation for each point against every other point. My current idea was to use a switch node and an accumulate field node, in order to make the output of the accumulate field node different depending on the index. But I couldn't fully get my head around the idea. Couldn't work it out. But if this for each node works the way I hope it does, then it should be able to do the mass distance calculation. Which means I'll have to upgrade to 4.3.
>>
>>997933
This sounds like you need a foreach inside a foreach. That's O(N^2); you can roughly halve the work by starting the inner loop from the current index instead of zero - the same way as in bubble sort - but it's still quadratic. I'm also interested to see if they will support nested loops straight away in 4.3 or whether they implemented it in a weird way and only one level is allowed. Anyway, even if only one level is there, you can just connect all vertices by other means and then do a single loop through all edges. Maybe this is actually the easiest solution to the problem altogether.
>>
>>997928
Man this shit seems super neat. Wish I knew what the fuck I was doing enough to take advantage of them. I'm having enough trouble as it is wrapping my head around the basic ones.
>>
File: foreach distance.png (352 KB, 1571x834)
352 KB
352 KB PNG
>>997936
>>997933
You're speaking too mathy for me. Don't know what O, N, or Logs are. Or what bubble sorting entails.
But I did some experimenting with foreach, and I think I got it working. Starting with a cube, I randomized the points slightly, because I'm going to check distance with the accumulate node, and I want each point to have a different result, so I can test if it's adding up properly. I stretched 1 point far away from the others, because after the distances are added up, the far point will have a significantly higher distance value. Again, just to make testing easier.

Long story short: after struggling to figure out how to make it work, I finally plugged everything in the right order, and now each point has the accumulated distance to each other point. Accomplishing this with MUCH fewer nodes than before. And I didn't have to create any new points to do it. Hooray. I was afraid it would require creating new geometry.

Everything seemed to be going well, until I subdivided the cube a bunch. As you can see in the picture, I increased the cube to 11,600 vertices. That slowed the whole thing down by a lot. A half second to do the distance calculation. Which I kind of understand, because it's essentially doing 11,600^2 calculations. Or 134,560,000 calculations.

In object mode, things still work fast. But in edit mode, things are slow. I set up a little frame animation to see if it animated any faster, and it does not. It takes half a second per frame.

I guess I have to learn a more optimal way.
>>
File: Repeat Distance.png (279 KB, 1558x778)
279 KB
279 KB PNG
>>998022
The repeat node can do it faster. I subdivided the cube 2 more times before it reached half a second of delay.
At 11,600 verts, it's very fast. Might be useful if you only need to figure out the proximity of a small number of particles.
I wonder if there's a faster way.
>>
>>998046
No, I spoke too soon. I forgot to store the distance. For some reason, using a store node after the repeat node adds an extra 80 ms to the repeat zone's time, and the store node itself takes 300 ms. The new total time comes out to just about the same as the for each version. Bizarre.

Oh wait, I see what's happening. The distance calculation isn't even "real" until I plug it into a node that will actually utilize it. So basically, nothing was doing the distance calculation until I put the store node in. Right. So both methods are about even in speed, and both are slow.
>>
can you recreate zbrush curve brushes using node tools? I mostly sculpt so I don't know what's possible
>>
>>998022
Do you need distances between any two vertices, or some kind of sum of distances from each vertex to all other vertices? I'm a bit confused why your table only has one position and not two (start vertex and end vertex). Or does every vertex have its own table like this? But in the picture it doesn't look like any single vertex is selected.
>>
File: foreach distance 2.png (324 KB, 1571x834)
324 KB
324 KB PNG
>>998072
What I "need"? I'm not entirely sure. It could be the sum distance, or a sum position or a sum rotation, or what have you. But basically, every point needs to be able to gather information from every other point and sum it up. Which is what I've done successfully here. The problem is the speed is very slow. The speed isn't a problem when the vert count is low. In an 8 point box, you don't feel the delay at all, it feels like real time. But with an 11,600 vert box, it takes half a second to process. That's no good. You can't use this for any complex geometry.

Anyway, more to your question: there actually are two positions. The for each node handles one element at a time, right? Starting at index 0, performing the operation you set up inside the zone, then moving on to index 1, performing the operation inside the zone, then index 2, and so on all the way down the index.

The position node plugged into the for each zone is getting the position of 1 point. And then I sample the position of the regular geometry, collecting the information of all 8 points. Then I do a distance calculation between that 1 point and all 8 points. Then I sum it up with the accumulate node and repeat the process with the next point, until all 8 points have been checked against all 8 points and summed.
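For anyone who thinks better in code than in nodes, this is the same logic as a plain NumPy sketch: one pass of the for each zone per point, each pass measuring the distance to every point and accumulating it. It also shows why it blows up, since the work grows with the square of the point count. The names here are made up for illustration, not Blender API.

import numpy as np

def summed_distances(points):
    # 'points' is an (N, 3) array; one output value per point
    totals = np.zeros(len(points))
    for i, p in enumerate(points):                            # the for each zone: one point per iteration
        totals[i] = np.linalg.norm(points - p, axis=1).sum()  # distance to all N points, accumulated
    return totals                                             # N points -> N*N distance calculations

rng = np.random.default_rng(0)
cube_corners = rng.random((8, 3))                             # stand-in for the jittered cube
print(summed_distances(cube_corners))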
>>
>>998083
Gotcha, I misunderstood your initial goal. I thought you needed basically a list of distances between every two points, but that's a different thing.
>>
>>998084
What do you know about tables? I was just thinking that I could probably achieve the same results in a more efficient way if I removed all the duplicate calculations. Like if I obtain the distance between vert 1 and vert 2, then later obtaining the distance between vert 2 and vert 1 is redundant, because 1 -> 2 is equal to 2 -> 1. So maybe there's a way to do the calculation once and store the value for both indexes. I'm not really familiar with tables. But I drew it out on paper, and it looked something like this:

X | 1  2  3  4
---------------
1 | 1  2  3  4
2 | 2  5  6  7
3 | 3  6  8  9
4 | 4  7  9 10

That shows which parts of the table share values. So you could hypothetically do about half of them, and then assign the duplicate values to their respective places.
>>
>>998088
Yeah, that's what I meant earlier with the bubble sort comparison.
Optimal loop structure looks like this:
for (i in 0..n): for (j in i..n): f(distance(points[i], points[j]))
instead of
for (i in 0..n): for (j in 0..n): f(distance(points[i], points[j]))
Starting the nested loop from i instead of 0 gets rid of the redundant calculations.
But I'm not sure how to translate this to nodes.
Maybe using "selections".
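Here's roughly what that looks like as runnable Python, assuming the goal is still the per-point sum from earlier. The inner loop starts at i + 1, so each pair is measured once and the result is added to both endpoints, which is your "fill both halves of the table from one calculation" idea. Same totals, about half the distance calculations.

import numpy as np

def summed_distances_triangular(points):
    totals = np.zeros(len(points))
    for i in range(len(points)):
        for j in range(i + 1, len(points)):      # start at i + 1: no duplicates, no self-distance (which is 0 anyway)
            d = np.linalg.norm(points[i] - points[j])
            totals[i] += d                        # mirror the one result to both indices
            totals[j] += d
    return totals

rng = np.random.default_rng(0)
print(summed_distances_triangular(rng.random((8, 3))))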
>>
>>998091
Can you explain the concept in layman's terms?
>>
>>998114
Sorry, I don't really have experience with "selections". I just saw a "selection" input on some nodes, and seemingly it can limit operations to a subset of the geometry. But I've yet to use them myself.
>>
>>998179
I have some experience using selections. They work on binary logic. On or off. 0 is off, 1 is on. Blender will take anything greater than 1 as on, and anything less than 0 as off. So you can even plug in integers and floats into selection sockets to use them.

Selections aren't really the part I need explained. It's the loop structure you wrote out. What is it doing? I don't know what it means. If you explained the concept, then I might be able to figure out a way to do it in nodes.

I've used selections inside of a repeat node before. One of my nodes takes a point cloud and figures out which points can "see" each other in such a way that recreates the voronoi triangulation. It does this by first creating duplicate points for every possible point combination, and then deleting the duplicates that can't see each other. It knows which ones to delete after doing a dot product and then making a selection out of all the duplicates that are on the wrong side of the dot. It loops through each point, adding up selections with each iteration. When the loop is done, the output selection is an accumulation of all points that need to be deleted, and then you just plug it into the delete geometry node.

I doubt that will work for the problem we're working on. Because you don't want to create every possible combination first. You want to reduce combinations with each iteration. Is that why you need a loop inside of a loop? In order to count down the iterations of the nested loop?
>>
>>998191
Yeah, what I described is very, very simple; it sounds like what you did there was quite a bit trickier. The thing I'm describing is just super basic imperative programming.
for i in 0..n : do_something(i)
is a basic "for loop" you'd see in almost any general purpose programming language,
what it does is run the code inside it with the value of i starting from 0 and increasing every step until it reaches n
so it basically runs
do_something(0)
do_something(1)
...
do_something(n)
The last iteration might be do_something(n-1) in some programming languages and do_something(n) in others (that's just a deliberate choice the language authors make).
Now when you have a "for loop" inside a "for loop", you can use the outer loop's variable to define your inner loop.
for (i in 0..n): for (j in i..n): do_something(i, j)
will run like this
do_something(0, 0)
do_something(0, 1)
...
do_something(0, n)
do_something(1, 1)
do_something(1, 2)
...
do_something(1, n)
do_something(2, 2)
do_something(2, 3)
...
do_something(2, n)
And so on.
Which means in your table it basically corresponds to walking through the cells on the right-hand side of the diagonal, going from top-left to bottom-right.
Now, in Blender nodes there's probably no direct "for loop", but the "for each" loop provides the current index, right? That means it can be used as the outer "for loop".
But for the inner "for loop" you'd need some mechanism that can take data only from the points starting at the index provided by the outer loop.
I had an impression that this is possible with selections, but I'm not sure.
There should be some nodes that take a start index. Maybe "sample geometry" or something like that?
A node that can provide all the geometry you need starting from a given index would basically be your inner "for loop".
>>
File: for each selection.webm (647 KB, 1506x758)
647 KB
647 KB WEBM
>>998201
I halfway understood that. But only halfway.

Maybe it can work something like this. I tried various nesting configurations to no avail. Nesting loops just seems to make everything exponentially slower. It starts slowing down at a mere 50 iterations. Forget 11,600 iterations. That just stalled everything indefinitely, forcing me to close and reopen Blender to get back to normal. I might not understand how to do nesting properly. It's very possible that I'm setting it up wrong.
>>
>>998256
Oh damn, I actually made it work with only the repeat node. As you can see, only the right side of the table above the diagonal is filled here, which means the same distances weren't computed twice. To fill the other half of the table with the same values I could have used another "Store Named Attribute" with a different ID condition for the selection. The thing is, a table like this is only useful for debug purposes I think. When solving the actual problem, instead of storing those distances as named attributes, you'd use them in some other computation that also has both vertices available to it. So for example, in practice I can imagine this being used to create curves between all pairs of vertices and set their radius based on the distance between the points, something like that.
>>
>>998278
Oh, there is some tricky indexing going on here. Instead of what I described in the previous post, this loop structure is more of
for (i in 0..n): for (j in 0..n-i): do_something(vertex[i], vertex[i+j])
So it's the same computations, just a slightly more complicated indexing workout.
It will also run exactly like this (the same as the loop in the previous post):
do_something(0, 0)
do_something(0, 1)
...
do_something(0, n)
do_something(1, 1)
do_something(1, 2)
...
do_something(1, n)
do_something(2, 2)
do_something(2, 3)
...
do_something(2, n)
>>
>>993717
Metaballs?
>>
Is there a way to kinda "batch" bake all the selected GN objects?
I've got like 1000 of them that I need to bake, so it's kinda out of the question to do it by hand.
They're not complex, just instantiating a random object from a collection using the location of the GN object as the seed (I use it to pick random buildings from a set to make a city).
Normally I wouldn't bake it, but I kinda need to move the city without the buildings changing. I guess I could use a set position offset, but all the buildings are also rotated randomly, so it's not just a simple matter of moving it with a set position... I think. I guess maybe if I could somehow move on a global axis, it would work though.

Not really sure what would be easier... Baking or shifting.
>>
>>998347
Does the bake node not serve your purposes?
>>
>>998348
>I've got like 1000 of them that I need to bake
1000 individual objects.
Seems like the bake node only does it FOR that one object.

Would it be super retarded to use the collection that the city instances are in, and instantiate the entire collection and move that?
>>
>>998350
Fuck... call me a retard, but it fuckin worked.
>>
Oof, tell me about it.
>>
>>998278
Can you organize this? I keep saying to myself, "I'm going to recreate this later". But every time I look at it, the random spaghetti demotivates me.
>>
>>993075
Is there a way to fake cloth physics with geonodes? I just want something that looks half decent, but runs well.
>>
>>999466
Yes, but it's too advanced for anyone here. I saw a youtube video about it some months back, when I was trying to figure it out. I doubt the performance will be any better. Geometry nodes either does light work, which is fast and easy, or heavy work, which is slow and sluggish. There isn't much in between. Cloth physics is heavy work. You're going to get sluggish speeds that are as slow as, if not slower than, the built-in cloth physics.
>>
File: depreciated.png (239 KB, 2224x963)
239 KB
239 KB PNG
what's the replacement for these "rotate euler" nodes in 4.2? i made this myself just through trial and error, so i have no idea what exactly i did, if anyone knows
>>
File: mailCoif.jpg (339 KB, 1737x1170)
339 KB
339 KB JPG
>>1000031
this is what it's supposed to do, but the rings no longer align properly
>>
>>1000032
>>1000031
Here's what the manual says:
https://docs.blender.org/manual/en/latest/modeling/geometry_nodes/utilities/deprecated/rotate_euler.html
>This node is deprecated, use the Rotate Rotation Node instead.

It redirects you here:
https://docs.blender.org/manual/en/latest/modeling/geometry_nodes/utilities/rotation/rotate_rotation.html
>>
>>1000033
oh yeah, i was too retarded to make "rotate rotation" work, ill try again
>>
File: Ark Shadow Test 05.webm (452 KB, 720x720)
452 KB
452 KB WEBM
>>996782
Got another piece of the puzzle. This video dropped the information in passing: https://youtu.be/M4v_hfGF4EM?t=1951 It was very brief, quite literally 30 seconds, and the image of the nodes was difficult to see. However, I managed to work out what he's doing... for the most part. Check out the image. It appears to be casting real shadows. I'm using the extrusion method to get the shadow zones, then making them transparent using a refraction node, and then comparing the light color inside the refraction area with a copy of the light color. When the two colors match, it changes the factor of a mix node, which displays part transparency and part shadow color.
Only the skin material has it applied so far. So that's the only one casting a shadow onto itself. Look at the neck. The head is casting a shadow onto the neck.

There's a problem. Look at the bottom and right side. The effect fades out at the edges of the screen. I'm not sure why that is. At first, I thought that's just how screen space stuff works. But then I took a second look at the video, and his shadows don't disappear along the edges. So I must be missing something. I hope I can solve that issue.
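In case the color-compare step is hard to follow, it boils down to something like this. This is a crude stand-in in plain Python, not the actual shader nodes, and the tolerance is a made-up parameter: when the color sampled through the refraction area matches the stored light color, the factor to the mix node changes, blending between the transparent result and the shadow color.

def shadow_mix_factor(sampled_rgb, light_rgb, tolerance=0.01):
    # 1.0 when the refraction-sampled color matches the reference light color, else 0.0;
    # this factor drives the mix between the transparent result and the shadow color
    return 1.0 if all(abs(s - l) < tolerance for s, l in zip(sampled_rgb, light_rgb)) else 0.0

print(shadow_mix_factor((1.0, 0.95, 0.9), (1.0, 0.95, 0.9)))  # 1.0 -> colors match
print(shadow_mix_factor((0.2, 0.2, 0.2), (1.0, 0.95, 0.9)))   # 0.0 -> colors differ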
>>
>>1000243
i watched his short film and it fucking sucked. couple of white weebs thought they could move to LA and make an anime. imagine putting all that time and effort only to make absolute slop and not even get any views at all. they are everything that's wrong with the west. I hope they get anal fissures.
>>
>>1000243
From render settings -> film, change the overscan
>>
>>1000311
Thanks. Already tried. It's broken. Apparently it's well known for not being functional, and requires a plugin. Or I could be completely fucking wrong. The guy in the video says his trick works in vanilla blender. So if you know the trick to get overscan to work in vanilla, I'd greatly appreciate it.
>>
>>993075
Hmm.. today I will not learn about geometry nodes because
>>
>>1000409
still stuck on rigging
>>
File: NodevemberPrompts2024.png (392 KB, 900x900)
392 KB
392 KB PNG
A reminder that we have Nodevember running this whole month:
https://nodevember.io/
I'll personally do at least one of those.
>>
>>1000895
You are really that desperate.
>>
File: 1728792201615147.jpg (93 KB, 1024x772)
93 KB
93 KB JPG
>>1000898
Yes, programming geometry in nodes is my only chance to get the attention I crave.
>>
>>1000895
I don't know what to do with those prompts. They can't honestly expect us to make a whole procedural environment or character in only a couple of days. The prompts are all very large and involved. An alchemist lab? Like a WHOLE lab??? So we have to model like a dozen objects, and then come up with some kind of node thingy on top of that? Like what? Procedural smoke? Procedural liquid? It's too much. There's a guy on /3/ right now taking months to finish an alchemy lab. And these guys are like "mmm, you can do it in 2 days." What?
>>
File: GbZmYtzawAA7OuW.jpg (365 KB, 1618x1000)
365 KB
365 KB JPG
>>1000914
Have you read the site? It's not competitive or anything, but works are posted on twitter or mastodon with the hashtag #nodevember.
So you can take a look and see what people were able to achieve procedurally in a limited time like this.
https://x.com/hashtag/Nodevember?src=hashtag_click&f=live
Some people do renders, some even do animations:
https://x.com/criscat/status/1852518936140558364
https://x.com/nullworks/status/1853040379064451466
Quality is wildly different depending on skill level.
It's not a competition.
>>
>>1000916
Never said it was a competition.
>>
>>1000914
> Procedural smoke?
One of the things I did today was animated fog. I think the same setup could work as smoke with some tweaking. It took maybe 20 minutes, and half of that was watching a volumetric fog tutorial on youtube. Also, in terms of relevancy to the prompt, some people who only do Substance just post a somewhat relevant texture and that's also fine.
>>
>>1000914
Oh and btw,
> So we have to model like a dozen objects
Most people are creating all the geometry with nodes as well, but I think it's fine to use existing assets; it just has to be highlighted that the geometry isn't procedural.
Here's my progress on the geometry so far. It's actually almost done.
>>
>>1000916
>It's not competitive or anything
It's a waste of time that only benefits the people who engineered a broken product by committee.
>>
File: l5zplw6o6msb1.jpg (160 KB, 1170x1146)
160 KB
160 KB JPG
>>1000923
If you've never participated in something similar, it's worth trying. I personally enjoy stuff like this.
It's also quite a memorable and rich experience compared to just randomly trying out ideas outside of bigger projects or doing tutorials for study.
>>
>>1000895
Man, that list is really fuckin gay.
>>
>>1000933
Why though? You don't like fantasy?
>>
>>1000947
I like fantasy. LotR is cool, TES was pretty neat before it started being shit, I'm a sucker for high-tech fantasy shit, but the list looks like it was made by a bunch of people on Twitter with flags, drawn avatars, and pronouns in their name.
Like the "OMG I LOOOOVVEE DND" crowd that only started playing it because they saw it on Stranger Things. Or pretend to be SUPER into Balder's Gate because they just played 3.
Like I'm not trying to gatekeep or anything, fantasy is as generic as it gets, and I'm sure I'd enjoy those things as well, but something about it just seems off to me. Just fake people all around, and that disingenuous kind of "like".

I dunno, maybe I'm just an overly cynical fuck these days. I'm sure my opinion would probably be different if the list wasn't covered in glitter and faggy fonts.
Honestly, I guess I'm just being a faggot. It's an okay list. Still kinda gay though.
>>
File: NodevemberPrompts2019.png (150 KB, 1326x981)
150 KB
150 KB PNG
>>1000983
Yeah, imo the main title looks like shit this year, but I like the prompt borders. The prompts themselves are nice and consistent. They sound like something I would expect a fantasy environment name generator to produce, all roughly equal in semantic content but with varied entities in question. The 2022 and 2023 prompts are more normie. The 2019-2021 prompts are more general. The 2019 design is kinda PROPER, even though it doesn't actually make sense to have nodes like that.


