/3/ - 3DCG

File: node spaghetti.webm (3.21 MB, 1062x598)
A place for Blender's best headache.
>>
>>993075
Remember when the mods nuked /3/? When we didn't have shit useless threads like this?
Yes anon, the software you use is the better one and you were right all along. You're a good boy, you're valid ¬_¬ ...
>>
>>993079
tf are you talking about?
>>
>>993075
Ahhhh, so that's how taffy is made...
>>
anybody have any good tutorials? I'm stuck in amateur hell
>>
>>993082
schizophrenia, please understand
>>
What are geo nodes for anyway? Can somebody QRD me on what they do?
>>
>>993148
procedural modeling. infinite possibilities but infinite complexity.
>>
>>993187
that doesn't help me understand what you make with it
>>
File: H05EZb1897wVwhlX.webm (964 KB, 718x384)
>>993194
everything, you can make anything mesh-related with it. that's the scope of geo nodes. literally, go on Twitter, there are thousands of examples
>>
>>993090
Tutorials are the reason you're stuck in amateur hell. Stop watching random tutorials. Pick something you want to do or make and start taking steps to make it. Look up specific things as you go. No point in learning how to use xgen when you only do hard surface modeling and weapons for example.
>>
>>993194
Basically you can program your own mesh modifiers. Stuff that adds or replaces faces and attributes. Like, want a tiled roof? You can write a modifier that'll lay tiles all over the faces of a shape. Wanna make animu outlines but don't like the solidify or outline modifiers? You can write your own. Stuff like that. All the stuff that'd be done with vertex or compute shaders.

The price to pay is the way they're set up is super goddamn unintuitive compared to material nodes, like holy shit.
>>
>>993200
>>993215
Oh ok, I think I get it now. Thanks for explaining frens
>>
File: THE ANTICHRIST.png (232 KB, 694x662)
>>993075
>look up how to do fucking anything in blender recently
>'it's easy bro, just use geo nodes!'
>said """easy""" task is the most convoluted spaghetti i've ever seen
AAHHHHHHH I HATE NODES I HATE NODES I HATE NODES I HATE NODES
>>
>>993200
Where is that webm from? Some of those actually look pretty useful.
>>
File: 21342.png (1.09 MB, 1003x909)
On the plus side, geonodes is so overly complicated and hard to understand that once you actually become halfway competent with them, Houdini will seem like a cakewalk in comparison.
>>
Anyone here halfway good with math? I want to create a point (A) that moves between two other points (B and C respectively).

Points B and C are given a value that can smoothly transition between 0 and 1, depending on the position of point A.

When A is closer to B, then B is the larger value. When A is closer to C, then C is the larger value.
When A and B share the same position, then B's value is fully 1, and C is fully 0. When A shares the same position as C, then C is fully 1, and B is fully 0.

I suppose it's the same or similar to the concept of normalizing weights. I just don't understand how to do it exactly the way I need to. I can make multiple points normalize, in a way, where they all proportion themselves to sum to 1. But I can't figure out how to distribute the weight in such a way where 1 point can sort of "cancel out" all other points, so to speak.
>>
>>993400
https://x.com/ymt3d/status/1745316148378554531
>>
>>993075
excellent sugar work
>>
>>993453
you're describing a lerp function

a = b + (c - b) * t

where a, b and c are the point positions and t is the normalised weight
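
in plain python, a minimal sketch of the idea (nothing blender-specific assumed; inverse_lerp is a made-up helper, but it's the bit that gets you the weights the other anon asked about):

def lerp(b, c, t):
    # a = b + (c - b) * t, per component
    return tuple(bi + (ci - bi) * t for bi, ci in zip(b, c))

def inverse_lerp(b, c, a):
    # project a onto the segment b -> c and clamp to the 0..1 range
    d = tuple(ci - bi for bi, ci in zip(b, c))
    ab = tuple(ai - bi for ai, bi in zip(a, b))
    t = sum(x * y for x, y in zip(ab, d)) / sum(x * x for x in d)
    return max(0.0, min(1.0, t))

t = inverse_lerp((0, 0, 0), (2, 0, 0), (0.5, 0, 0))  # 0.25
weight_c = t        # hits 1 when a sits on c
weight_b = 1.0 - t  # hits 1 when a sits on b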
>>
>>993469
So c-b gets the distance, right? And then multiplying it by t scales it to the weight. And then adding b back places the points back where they're supposed to be. Do I understand that correctly? I'm a little lost. How does this assign values onto B and C?
>>
>>993473
i'm not a mathman, but a vector subtraction creates a new vector.
so c - b creates a vector pointing from b to c.
the length of (c-b) is the distance between the two positions.
i've never really thought about the way the lerp function works tbh, houdini just has a built in one.
i think freya holmer has a video about it though.
>>
>>993473
>How does this assign values onto B and C?
i'm not sure what you mean here?
you can pull the query positions right / know them already?

if you want to solve for a different variable, just rejigger the equation.
also, those chatbots are good for basic stuff like this.
>>
File: cage test 01.webm (187 KB, 436x338)
187 KB
187 KB WEBM
>>993477
>>993478
Sorry if I'm explaining badly. Maybe a visual will help demonstrate what I'm attempting. I'm trying to get 1 point to sort of average itself between multiple points, in such a way that it can't escape its "cage" so to speak.
So here, I took a regular cube, and erased the faces so the inside is visible. Then I made a separate object that's only a single vert and placed it inside of the cube-cage.
Using a bone, I can move the vert around.

Then, in geometry nodes, I averaged the position of the cube's points. And randomly rearranged the cube's points a bit. Keeping the irregularly shaped cube a convex shape. (I'm pretty sure making a concave shape would add complications I don't want to bother with)

Now, ideally, when moving the bone, the point inside of the cube should attempt to follow the bone with near perfect accuracy. Only stopping when it reaches the boundary of the cube. I don't expect the boundary to be exactly the same as the cube's faces. I'm guessing that averaging between multiple points would create a sort of convex shape that's pinned down by the vertices of the cube. I don't know, that's just a guess. But what I'm fairly certain on, is that if the weights are averaged properly, then if I attempt to move the caged vert into the corners of the cage, then it should sit entirely flush with the vert that defines the corner.

Does that make sense? As you can see in the webm, I'm part way there. The problem is that the points are all contributing to the average too much.
>>
>>993483
i see what you're trying to do but thinking about it makes my head hurt
i'd just check if i was inside the container geo (in houdini i would use a volume representation for this, in blender the solution seems to be raycasting) and if not, just find the nearest position on its surface and just stay there (blender seems to have a sample nearest node).

not the fastest solution, but it'll always work (+ i don't do vidya so i don't know how they'd handle it)
>>
>>993490
Yeah, I know I could do raycasting. But the raycast node comes with its own issues. If I can get 1 point weighted properly, then I can do it with more points, and then the magic would really happen.

The crazy thing is, I think I figured out the weighting method I need months ago. But I never applied it. It only existed in my head. Now I'm trying to rediscover it.
>>
>>993483
Maybe find the nearest face, project onto that plane (easy with a dot product), and if the point is behind the plane (dot product also) then don't modify it. If the point is in front of the plane, snap the bone to the projected position.
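
A rough Python sketch of that, assuming Blender's mathutils Vectors and a plane given as a point plus a normal (snap_to_plane is a made-up name):

from mathutils import Vector

def snap_to_plane(p, plane_point, plane_normal):
    n = plane_normal.normalized()
    d = (p - plane_point).dot(n)  # signed distance, via the dot product
    if d <= 0.0:
        return p                  # behind the plane: leave the point alone
    return p - n * d              # in front: project onto the plane

snap_to_plane(Vector((0, 0, 2)), Vector((0, 0, 1)), Vector((0, 0, 1)))  # -> (0, 0, 1)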
>>
>>993483
nvm, I tried this >>993528 and it didn't work.
However, raycast does work perfectly.
You can simply constrain the vector from the box origin to the bone to a maximum length of ray_intersection_distance.
I tested it and it appears to work perfectly, both inside and outside the box
>>
File: cage test 02.webm (468 KB, 1562x802)
>>993540
No, you were onto something with the nearest idea. I got it to work.
This technically gets the results I asked for. But not quite in the way I need it to work. The weighting is important. I need the cage itself to have a sort of "field" to it. Where stray points inside of the cage are affected by the weight of the cage.
>>
>>993552
Interesting. When I tried Sample Nearest Face, it would sometimes pick the wrong face (like the top face instead of the side, since the edge is technically equally close). But I didn't use capture attribute, because idk how that works.
Maybe it does the same thing for you, but you fix it with the Geometry Proximity node.

>Where stray points inside of the cage are affected by the weight of the cage.
How should they be affected? Are there any points inside for which you know what projected values they should have?
it's easy to push them inward or outward, or use float curves to adjust the feel.
For example, define the 0-1 range where 0 is the center, and 1 is the plane intersection distance. Apply float curve of some kind, preserving the 0-1 range.
Then map it back to a position
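
A sketch of that remap (made-up helper, assuming mathutils Vectors; the lambda stands in for whatever Float Curve you draw):

def remap_radial(bone_pos, center, hit_dist, curve=lambda t: t * t):
    # 0 at the center, 1 where the ray from the center hits the cage wall
    offset = bone_pos - center
    t = min(offset.length / hit_dist, 1.0)
    t = curve(t)  # any 0-1 -> 0-1 shaping function; t*t pulls points toward the center
    return center + offset.normalized() * (t * hit_dist)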
>>
>>993555
update, I tried the geometry proximity node and it actually gives the closest point on the geometry to the sample point (as you were already aware).
That's very convenient! Good to know.
>>
File: float_curves.png (25 KB, 554x345)
>>993552
An example of the curve idea.
The curve on the left makes the points want to tend toward the perimeter of the shape, while the right curve makes them pull toward the center.
You might even want to increase the range of the curve, so that the bone has to be further outside the volume for the generated point to reach the perimeter
>>
>>993555
I've been doing nodes for like a year now, and capturing attributes still doesn't really make sense to me. Actually, within the last week or so, I think the idea is only now starting to sink in. By capturing, you're basically telling blender to save your results up to that point. If you have two geometry lines, and only one of them has the capture on it, then only that 1 line will have the results saved. The other geometry line is blind to the results. So in order to make the other line see, you have to sample the results off the first line, and then save (capture) the results on the second line.

I needed the vert to see some of the information from the cube. So I performed a couple transfers from the cube geometry line to the vert geometry line. But the cube consists of 6 faces, and the vert is only a single point. So how do you save 6 bits of information onto 1? Well you can't. The vert can only accept 1 face attribute at a time. But that's ok. We only need the vert to know the attribute of the face it's nearest to. So we sample the nearest face, and then save (capture) its attribute.
>>
>>993483
Just do add points, pipe in location from object info on ur armature and then limit it with a mesh boolean.

Also you can just go to visibility in an object's properties and make it a wireframe, no need to delete faces.
>>
File: 1703591832814890.png (152 KB, 616x442)
i made dis
>>
>>993670
Looks nice. What are you making, something sci-fi?

>>993555
>How should they be affected? Are there any points inside for which you know what projected values they should have?
I don't know. I keep bouncing back and forth. I'm not sure exactly what I need. But right now, I'm thinking that the center of the cage should be "dense". Impassable. So calculate the center and give it a value of 0. And then the boundary of the cage is lighter, possibly more flexible. So it should have a value of 1. But it should also be impassable as well. The inner verts unable to escape the cage. Yet they can move more freely along the edges, and more slowly as they compress toward the center.

I'm not sure float curve will help here. Or at least, I can't imagine how. I think I have a different idea, but it will require some stupid math and the accumulate field node, which I'm not even sure is possible. And I'll have to create a copy of the verts, unparented from the bone. Because I need the position pre and post movement. Which is impossible as long as the verts are parented directly to the bone.

I wish Geometry nodes had at least a *basic* armature node. They don't have to give us a full suite of armature nodes. But even something as simple as knowing the positions of the bones in pose and in rest would unlock many possibilities. As it is now, you have to place empties or verts at bone locations to get that information, and it's just a pain in the ass.

>>993645
Took like 20 re-reads to understand what you meant. Even if that solution works, it won't suit my needs, as it will delete points that escape the mesh. I don't want that.
>>
>>993711
why are you spending more time on this if ray intersection is working for you?
what's the use case that ^ isn't the right solution?
>>
File: mass squish.png (14 KB, 874x710)
>>993715
I'm sorry, it's all very experimental. Which is why I'm holding back on explaining too much. Because I don't want to bog you all down with details to ideas that I'm not even sure will work.

I want to create something like mass. Even if it's kind of faked. I believe that in order to do it, points need to understand the mass they are a part of first. Creating something of a "field" capable of repelling stray invading points. But not in a strict binary way where you're either in or out. Rather in a soft density kind of way, where the mass can compress and give way to invading points.

Raycasting might help do the part where points detect their place inside of the cage. But I haven't actually tried it yet. Because I need every point to check every possible plane it can "see" while inside the cage. Which means casting 1 ray isn't quite enough. (or maybe it is, idk) I think I'll need to dynamically adjust how many rays are cast from a single point on the fly, with no upper limit. If a point can see 10,000 planes, then it needs to cast 10,000 rays to know the points of all the planes. That's not feasible with the raycast node, I'm pretty sure.

It's doable with a plane projection calculation. And then calculate if the projection is inside or outside of the planes. And then for all the points inside of the planes, get the attribute. However, that's a problem, because a projection calculation like that requires creating a shit ton of points, which slows everything down. There has to be a faster way.
>>
>>993717
Look into signed distance fields. MagicaCSG is a good example of what they can do.
>>
>>993717
>>993734
yeah i was going to suggest volume/sdf lookups again. the lookup is cheap and gets you distance from the surface for free (you can sample the volume's gradient for direction to the surface). the conversion to a volume representation will probably be the most expensive bit, but it can be pretty fast depending on voxel resolution.
FLIP and MPM sims use a volume+point method to do their thing.
blender support for that stuff i don't know about though.
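
the core idea is tiny though. a python sketch, sphere only (real setups convert arbitrary meshes into voxel grids, but the interface is the same):

from mathutils import Vector

def sdf_sphere(p, center, radius):
    # signed distance: negative inside, positive outside, 0 exactly on the surface
    return (p - center).length - radius

def sdf_gradient(sdf, p, eps=1e-4):
    # central differences; the result points toward the outside of the surface
    return Vector((
        sdf(p + Vector((eps, 0, 0))) - sdf(p - Vector((eps, 0, 0))),
        sdf(p + Vector((0, eps, 0))) - sdf(p - Vector((0, eps, 0))),
        sdf(p + Vector((0, 0, eps))) - sdf(p - Vector((0, 0, eps))),
    )).normalized()

sdf = lambda q: sdf_sphere(q, Vector((0, 0, 0)), 1.0)
sdf(Vector((0.5, 0, 0)))  # -0.5: half a unit inside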
>>
>>993734
Signed distance fields hurt my brain. I don't understand the math. Do you have any resources for noobs to learn? I really need this shit dumbed down as much as possible.
>>
>>993434
Wait a minute... is this a merchant?
>>
>>993899
oy vey, I see it too... meshuga
>>
>>993434
>Houdini
Is easier and better than Blender.
>>
File: 1236667.png (716 KB, 687x696)
>>993903
>easier
It's easier in the sense that it has a built-in interface for points and primitives that actually shows you the index numbers you are working with, and the fact you can do the same with attributes and the like. Also there's a built-in SOP for most things, and multiple ways to make your own outside of connecting node spaghetti, like supplementing with VEX wrangles.
>better
It's better for proceduralism and simulation because every facet of geometry you are working with can be manipulated in ways that are next to impossible in a normal build of blender. I can technically manipulate the color attribute using Geometry Nodes and might be able to use that to do something simple like displace some geometry, but I can't do stuff like converting a float value into a viscosity attribute to drive a FLIP sim. Blender doesn't even have a stable DOPs equivalent to Houdini.
>>
Trying a different approach to get deformation. Well, it's an old approach I attempted before, but I'm a little more experienced now, so I can do more than before.
I did a little test for the shoulder region. Set up a collection of empties. Copied the collection, and parented the copies to bones. Brought the two collection into nodes. One collection is treated like a rest position. The other collection is treated like a pose position. The empties can provide location and rotation information.

I set it all up so that the mesh near the shoulder empty deforms the most. Gradually deforming less as the mesh gets near the empties that neighbor the shoulder empty. Essentially creating a perfectly smooth and normalized zone of influence around the shoulder. It's perfect to a fault. As with so few empties, the zone of influence is able to reach farther than desired. But that's as expected. With more empties placed in key zones, they all would support each other, and avoid unwanted influences.

Then I added up the rotation for all the empties, and rotated the mesh only inside the isolated zone of the shoulder. The effect is jank right now, because it's still very underdeveloped. But it's looking promising so far. The idea is to make every major deformation zone be normalized to their neighbors. And with clean efficient nodes. It took a lot to set up only the shoulder with 3 neighboring empties, so I have to develop a method that automatically sets up all the masking and rotations for every zone. I have ideas for how to do that, but I'm tired and calling it enough for 1 day.

Moving the shoulder empties around, changes where the zone of influence is. Which makes for a funny warping effect.
>>
>>993075
ah that's a fine-a pasta mamma mia
>>
Are there any good geonode alternatives to skin modifier? I'd like to avoid having intermediate loop cuts, to have configurable number of sides (skin always generates kinda cuboid stuff, what if I want 3 instead of 4, or maybe I want 8), also would be nice to have configurable topology at join points, maybe configurable bevel amount or something.
>>
File: SDF 01.webm (1.64 MB, 1458x516)
>>995092
>>993738
>>993734
I didn't look up how to do SDF, but I think I got something like it going by accident. I was just looking for a way to fix my shitty rotations, when I somehow managed to make a mask that was oddly smooth. Normally when you create values that spread across the mesh, you can see one point blending polygonally into another. But apparently, you can create smooth transitions across the mesh.
I followed that idea, and then I realized I essentially created a space that the mesh is reacting to. And I can overlap them. Of course the first thing I did was isolate a titty.
>>
File: Black hole titty.webm (195 KB, 600x550)
>>995356
I tried using the sdf in a simulation node. But I still don't know how to make anything worthwhile in sim nodes, so she ends up getting sucked into her tit like a black hole.
Actually, I got better results with other configurations, but nothing so great to show off.
>>
>>995321
Also interested.
Those unnecessary loops drive me up the fuckin wall. Like yeah, I can throw a decimate on there set to planar, but it really should be simple geo from the start. Skin is great for shit like beams, rafters, and other industrial bits, but it really screws the pooch in terms of polycount and skyrockets way too fuckin quick.
>>
>>995092
You should keep that saved in another file for later. Suppose later you design some space monster that breaks reality, and the deformation seen here reflects that. Could be a pretty cool effect.
>>
File: Boob Inflate.webm (1.34 MB, 720x540)
>>995357
I accidentally created inflation. I've been trying to figure out how to make the mesh conform to the "mass" of the field. I did kind of get it. I can make it so that when gravity is applied, the mesh sort of drapes over an invisible sphere. But the parts of the mesh that don't drape over the sphere continue to fall, and it looks like a thick dripping fluid. Like melting cheese.

I need a way to stop edges from losing shape. Unfortunately, edge correction is hard. I figured I could cheat by just scaling out from the center of the field. And it created this inflation effect. The mesh is pushing out to the boundary of the field. Which isn't perfectly spherical, because I scaled it to an egg shape to better align with the shape of the breast. The scaler is even nullifying the effect of gravity.

Needless to say, this isn't what I was going for. But I think it's neat to be able to visualize the boundary of the field.

>>995507
There are probably simpler ways to do such abstractions. That node group is very cluttered, because I'm trying to find ways to control the rotations in a very precise way. But if all I wanted was to make stuff bend all abstractly, then I'd make something a lot simpler.
>>
>>993075
I've made a basic soft boolean, idk why the intersecting edges are not exposed in the boolean modifier and you have to do all this shit.
https://gofile.io/d/vbadU9
>>
File: 324582.png (364 KB, 1801x911)
>>995321
>>995433
I got you covered. If you need more resolution just do a resample before.
>>
>>996058
Sheeeeit, so simple I'm honestly amazed I never bothered.
Like I've done shit way more complicated than that. Granted, I haven't actually run up against the issue I mentioned with the skin modifier since I actually learned GN.

How well does it work with multiple, connected/unconnected edges? We ain't just usin the skin modifier to make single curves now. I use it for scaffolding and railings and stuff.
>>
>>996055
im not downloading that, just send a screenshot
>>
>>996102
I can't upload imgs here because my new isp is blocked, it's dynamic or something. Anyhow, it only works with simple geometry that doesn't fuck with the bevel modifier.
https://imgur.com/a/1gLFT0D
>>
>>996108
upload the image to https://catbox.moe/
>>
>>996109
Sure, whatever.
https://files.catbox.moe/o1al2n.png
https://files.catbox.moe/2vjzmw.png
>>
>>996102
I forgot to add that you have to create a vertex group with the same name that you put in the node tree to use it in the bevel modifier.
>>
File: Ark Shadow Test 01.webm (682 KB, 720x720)
Trying to create a cast shadow using nodes. No lights. Instead, placing an empty in the scene, and then the geometry reacts to the empty in such a way that the empty is essentially a spot light.

The first part is easy. Just subtract the location of the empty from the position of the mesh. Then, get the normals of the mesh. Then combine them into a dot product. That will "light up" the parts of the mesh that are facing the empty. Very easy light and shadow. The problem is that there are no cast shadows. That takes more effort to pull off.

My poor solution was to cast rays from the empty, aiming toward the mesh. The rays that return to their points uninterrupted are assigned 1. The rays that return a shorter distance than their points are labelled 0. 1 points are light, 0 points are dark. It creates a cast shadow. Combined with the light made from the dot product, and you essentially have a working light.

The problem is that casting rays from only the points creates the jaggedness you see in the webm. I don't know how to make it smooth. I tried a number of things, but nothing works. The most likely to work is probably a technique called "shadow volume". It's basically where you extrude the mesh along the angle of the light, and the extruded geometry that intersects with the mesh will outline where the cast shadow zones are. I can do the extrusion. But I don't know what to do after that. Like, how do I imprint the silhouette of the extruded mesh onto the regular mesh? It must have something to do with textures and UVs and shaders. But what? I'm at a loss.
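
The per-point logic above, as a Python sketch (assuming mathutils Vectors; cast_ray is a stand-in for the Raycast node, returning whether anything was hit and at what distance):

def shade_point(point, normal, light_pos, cast_ray):
    # step 1: the dot product "light up"
    to_light = light_pos - point
    dist = to_light.length
    to_light.normalize()
    diffuse = max(normal.dot(to_light), 0.0)
    # step 2: the shadow ray; nudge the origin off the surface to avoid self-hits
    hit, hit_dist = cast_ray(point + normal * 1e-3, to_light)
    return 0.0 if (hit and hit_dist < dist) else diffuse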
>>
>>996199
You plan on doing the final renders in blender eevee? Personally, what I would do is use the existing lighting tools, pipe a Diffuse node into a Shader to RGB node, then pipe that color into a color ramp to get the shades you want... What you're asking to do would require a fair bit of fucking around, since you'd need to do what you're doing with every pixel (fragment) of your mesh that is displayed, where you're currently doing that with every face.
>>
>>996199
Pretty sure those are called shadow terminator artifacts, though I don't know if searching for them will do you any good except for people having trouble with them or Blender bragging about getting rid of them.
Why are you doing this with GN though? Shaders might work better, and you could do something pretty similar by using a straight up Normal node and changing the vector with drivers via an empty. Not to mention you get the Shader to RGB node as well.
>>
File: Ark Shadow Test 02.webm (704 KB, 720x720)
>>996203
Yes, Eevee. I don't want to use the existing lighting tools, because the shadows kind of suck. I'm sure you've noticed that. Cast shadows and contact shadows are balls in blender. Plus, you can't get toon shading and colored light at the same time. Because you can't tell the diffuse node to only work with a single light. The diffuse node works with all lights, all the time. Makes mixing colored lights impossible for toon shading. Pic related is impossible with normal lights. Doable by making your own lights.

>need to do what you're doing with every pixel (fragment) of your mesh
That's the idea. I'm reading around, and it seems I need to figure out how to cast rays per pixel. Which I suspect may be possible to do in shader nodes. Some of the nodes there work per pixel... I think. Some nodes are described as affecting the "shading point". Which is a term that isn't explained in the manual, but I *think* it's describing individual pixels.
The Camera Data node has a Z Depth function that is explicitly described as per pixel.

The problem is that I don't know how to manipulate per pixel.

>>996226
I haven't learned drivers. I know they can do a lot of functions. But I don't really understand how they work. They feel like magic. You type random things into boxes, and then *boingo* suddenly the data is different? What? Makes no sense. I'd be willing to learn if I saw that using a driver was capable of doing something I couldn't do with nodes + if I desperately wanted to do it. Like if drivers could somehow produce per-pixel rays that shoot at the empty, that would be nice. But for just working the normals? I can set normals up with nodes very easily.

I actually do have the dot product part set up in both geo and shader nodes. But geo nodes has the raycast node, which allows me to shoot rays. So I need that anyway, unless there is a way to manipulate rays in shader nodes in a similar way.
Geo nodes also makes the extrusion part of the process possible.
>>
>>996235
I'm not sure there's a way to have enough control to do per-pixel raycasts from shaders... You're welcome to try and report back, though.
What I would do:
>Cast shadows and contact shadows are balls in blender.
Increase shadow quality settings and only use lights which produce sharp cast shadows to improve this. It'll never be perfect unless you use cycles, but it can be made workable.
>Plus, you can't get toon shading and colored light at the same time. Because you can't tell the diffuse node to only work with a single light. The diffuse node works with all lights, all the time.
You can, by using render passes and compositing. It's a bit unintuitive to initially setup in blender, but once you put it together it's a valid workflow.
>>
>>996239
Compositing eh? Perhaps. I guess I could try that. But that doesn't directly help in what I'm attempting to create. It's a "good enough" solution. Not a real solution. If I can do what I want using either geometry nodes or shader nodes, then I can manipulate lights in real time, and not have to render every time I make a minor change, just to see what that change looks like.
And if I could somehow manipulate the per pixel rays to see from the location of the empties, then I could draw perfect shadows. And then the lighting system will actually work, and not half work.

I have the general idea in my mind now. Because I found a video of a guy who created his own render engine inside of geometry node. He's an absolute mad man. Here's a timestamp to when he starts to set up the cast shadows: https://youtu.be/FqQYNdQLUDA?t=2751
But you should at least skim through part 1 and 2 in order to understand what he's doing.
I'll just tell you: He's created a grid with the dimensions of your typical computer screen. 1920x1080. Then he fires rays from a focal point just behind the grid, toward the objects of the scene. He gets the hit, and transfers the data to the corresponding block in the grid. The blocks are effectively pixels, and the grid is effectively a screen. Doing a bunch of math, he can capture shape and depth and it renders the scene onto the grid.

For cast shadows, he just takes the first ray hit, and then does another raycast toward the light. Any ray that's obstructed by an object before reaching the light toggles the grid block off. Which makes pixel perfect shadows for his grid. It's really that easy. If only I could cast a ray from the view to the surface, and then cast a second ray from the surface to the light, then I could make pixel perfect shadows. The possibility is right there within my reach. I can feel it.
>>
>>996241
Oh, yeah this should work. Geonodes operate on geometry, so making the geometry the pixels seems like an obvious solution, in retrospect.
>>
>>996242
>Oh, yeah this should work.
How though?! I don't want to create my own renderer. I just want to do the shadow trick and move on with my life.
>>
>>996244
Well... If you want to forcefuck per-pixel computations into geonodes you gotta do pretty much exactly what you've laid out in the post right above. Can't really skip any of the steps.
>make grid the size of your render
>cast from them away from the camera
>if you hit a mesh, cast from the hit toward the light
>if you hit a mesh, shade as shadowed
>if you reach the light, shade as lit
Desu I'm not sure pixel perfect shadows are worth all that compared to conventional methods, but it would work.
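
Boiled down to a sketch, anyway (pixel_ray and trace are stand-ins for the grid setup and the Raycast node; trace returns whether it hit and where):

def shadow_mask(width, height, pixel_ray, trace, light_pos):
    mask = [[1.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            origin, direction = pixel_ray(x, y)  # primary ray through this grid cell
            hit, pos = trace(origin, direction)
            if not hit:
                continue                         # background stays lit
            to_light = (light_pos - pos).normalized()
            blocked, _ = trace(pos + to_light * 1e-3, to_light)
            if blocked:
                mask[y][x] = 0.0                 # something sits between us and the light
    return mask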
>>
>>996246
This is bullshit, man. Even if I did all that with the grid, how would I make the results mesh with the rest of the scene? I would have to put the grid directly in front of the camera. The grid will get colored, and block the camera. I suppose I would have to delete all of the pixels that aren't shaded, in order to prevent them from obstructing the view. I dunno.... doesn't sound quite right. And not as convenient as I was hoping for.
>>
>>996248
well, yeah. No one said near-realtime pixel perfect shadows were gonna be easy, there's a reason we've come up with all this bullshit to do them kek.
I think the idea would be that the grid IS the view, so that it's not obstructing anything, it's actually what you want to see.
If you don't wanna fuck with that, you would probably have a nicer time trying to do it with custom rendering in Unity or Godot.
If you wanna stick to blender... Maybe you could try to write your own renderer and hijack the EEVEE raycasting in 4.2 for your own purposes. No idea how to go about that, but it's possible.
>>
>>996249
It could be easy. If only there was a way to bounce the pixel rays. Just one bounce. That's all I need. That's not asking for a lot, right?

Did they patch 4.2 to not have shit performance?
>>
File: Empty Based Light.webm (994 KB, 1200x1200)
>>996235
>But I don't really understand how they work. They feel like magic. You type random things into boxes, and then *boingo* suddenly the data is different?
I mean yeah you can do it like that, but I think most people now just right click values, and copy/paste them as drivers.
At its most basic level, you're just linking one thing to another. Think of it like parenting, but for values.
Like say you want to link the z rotation of an object (or any object) to the X location of another, so when you rotate the first object the other object moves by the same amount. You just right click the Z rotation of the object, copy it as a driver, and paste it into the X location of the other. Ez pz driver. If you want to get fancy then you can introduce math, or just go to the driver curves and adjust the values there to get things how you want it.

As for your light based empty, have you tried something like this? The empty just uses a dot product with the normals to color different parts of the mesh. Here I'm using black and white, but you can do whatever with it.
>>
>>996251
>Did they patch 4.2 to not have shit performance?
It seems like there's a bug giving some people shit performance, but it works pretty good on my machine (tm).
>>
>>996257
>>996235
I forgot to mention that the rotation and scale of the empty can change the light as well, not just the location.
>>
>>996257
NTA, but that's the easy part, the issue is he wants cast shadows too.
>>
>>996260
Ahhh, I've fundamentally misinterpreted the issue then.
I think I've done something like that in the past though, so lemme see if I can dig it up.
>>
>>996257
Texture coordinates?! I wasn't expecting that. But yeah, like the other anon said, that's the easy part. Cast shadows are what I need. Though your method of doing the dot product uses 1 less node. So I'll probably copy it from now on, in order to save space.

>>996258
My machine is a normal desktop PC with windows 11, AMD Ryzen 7 5700, RTX3070, and 16GB of some kind of ram. So basically, it's a half decent PC using popular parts from regular brands. The fact that 4.2's performance goes to shit on my machine, I can hardly think of as a "bug". More like they didn't even fucking test the thing on any hardware but their own. Then they released it with their fingers crossed, praying it wouldn't fuck up, which it did.
>>
File: Ark Shadow Test 03.webm (591 KB, 720x720)
The extrusion method, or the "shadow volume" method, looks the most promising. You can see how the extrusion creates cast shadows on the body in appropriate places. If I could somehow grab that as a mask, I might have something. Or perhaps get the difference between the depth of the extrusion and the depth of the normal mesh, and then the less-than-1 distances will create the mask. That might not work either. Because the ray has to count 1 for the front of the extrusion, and then 1 again for the back of the extrusion, in order to determine if it's inside or outside.
>>
>>996076
>How well does it work with multiple, connected/unconnected edges?
It works fine unconnected but starts to break when connected.
>>
>>996292
Then yeah, not quite a 1:1 replacement yet. Should be interesting to see how the OG modifier works when the devs eventually convert all the modifiers to GN. Something I'm both curious and extremely scared to see. They're gonna fuck up a lot of shit with half-assed implementations. Just like bloom.
>>
Did a quick experiment to learn how geometry nodes and shader nodes handle proximity differently. The position of the mesh is getting compared to the position of an arbitrary point in space. And then a distance threshold is established. In this case: it's the value of 1.7. I chose that number arbitrarily too. Dividing the distance by the threshold will make every point inside of the threshold a value less than 1, and everything outside of the threshold a value greater than 1.

As you may already know, blender automatically blends the values from point to point. So if 1 point has a value of 1, and another point has a value of 0, then it creates a gradient between the two points. Normally, these gradients look squarish, as they follow the edges of the planes. However, you can manipulate this by pushing 0 down further into the negatives, and 1 up farther beyond 1. When you do that, the gradients along the edges shift as either black or white dominates the other. With precise control, you can make the gradients appear circular.

However, this does require dense enough geometry to create the angles of the circle. If you try it on a 1x1 plane, it will still appear blocky. But with a 4x4 plane, it begins to fool the eye. If you scale the mesh to the threshold, you can get more accurate interpolations. I did it before, but I already forgot how. The interpolating I mean. I remember how to scale. But not what comes after that.
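
The divide-and-steepen trick as a fragment (position and target assumed to be mathutils Vectors; 1.7 is the arbitrary threshold from the post):

factor = (position - target).length / 1.7  # < 1 inside the threshold, > 1 outside
lo, hi = 0.8, 1.2                          # widen or narrow to push values past 0 and 1
mask = min(max((factor - lo) / (hi - lo), 0.0), 1.0)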
>>
File: Shader interpolation.webm (672 KB, 784x724)
>>996489
However, shader nodes work differently. It's not interpolating between points of the mesh. It's doing something that I don't quite understand. But it's a lot more accurate. Making a perfect circle. It doesn't matter how much or how little geometry you have on the plane; even with 1x1, it still makes the perfect circle.
>>
>>996489
>>996490
that's basically the shader working on a per-pixel basis and the geonode working with verts (AKA geometry).
>>
File: 001.jpg (1.35 MB, 2904x1652)
Yo yo yo dudes, I'm trying to make some power lines, but I'm having trouble. I got the poles to gen, and tilt, and I even have wires going between them that match the tilt. Problem is, the wires don't follow a smooth curve and just come to sharp points.
I'm not really sure how to fix it. Any help would be appreciated.

If someone wants to take a look at it, here's a catbox link. The node graph is a bit too large and wild to use a screenshot, and to expect someone to re-create it from a screenshot.
https://files.catbox.moe/iuk6gk.blend
It'd be super nice to have a proper mathematical catenary curve between em, but honestly I'd take any smooth curve at this point.
Thanks in advance,
>>
File: power poles.png (233 KB, 938x491)
>>996497
I made an attempt. My math is shit, so it's probably incorrect. I didn't even bother learning what a catenary curve is. I wanted to fix how the lines are aligned. But that seemed like a daunting task. So after fiddling with some nodes, I decided to let them be and focus on the drooping issue.
https://files.catbox.moe/9kc7n5.blend

Adjust the node labelled "odd" to make the lines more or less smooth.(Keeping the integer odd, cuts the cable in half.)
Adjust the node labelled "droop" to make the lines more taut or slack.
I deleted the map range and random node. Because I wasn't sure what they did, and they were in my way. No special reason.
You could probably improve on this. I did shoddy work. But it's curves now, so eh.
>>
>>996515
Thanks bro. Works pretty well enough for my needs.
The alignment issue was an easy fix, just nudge the curves forward a bit, since the beam they align to is a bit forward of the origin point of the pole. Just had to add a few nudges to the Y value of the first Set Position nodes in the "Wire Create" section for the curves.
As for the random value and map range, they were my attempt at adding a bit of random tension to the wires, so they'd droop at a common variable, but some might be more/less droopy than others. Didn't quite work as intended and just added noise to em. I thought maybe I could add that randomness to the droop factor in yours but it seems like it did the same thing.
Honestly it's a really small thing that doesn't matter, so it's all good in the hood.

Could you explain a bit about why what you did worked? It doesn't look too different to what I was trying to do, though it seems like you turn yours into a mesh before drooping it. Which I guess is the important bit.
No worries if you don't feel like it. Still, that was a big help, keep an eye out in the wip thread for it.
>>
>>996560
Well first of all, you didn't subdivide in the file you uploaded. You had the subdivide node set to 1 cut. Which is not enough cuts to work with. I upped the subdivision. That made it look something like a droop, but it looked like a bad droop. So I had to redo the math to make it look smooth.

In order to do the curve calculation, I had to know the distance between each two anchor points along the wire. I didn't know how to easily do that, so I had to use the shortest path node to get the same kind of information. I didn't know how/if the shortest path node works with curves, so I had to convert the curve to a mesh in order to use it. So converting to a mesh was merely a necessity for use of the shortest path node. Not mandatory if you can think of another way to get distances. Like say for example, creating points where the wires should be anchored, and then instancing wires onto the points. So you have individual wires between each pole, rather than one long wire. That way, you could sample the length of each individual wire.

But anyway, what the shortest path node does, is it attempts to reach a destination in the shortest possible moves along an edge. And it will count up how many moves it made to get there, by what they call "weights". Since you already defined what the anchor points are by using their proximity to the pole, I just used that to define where the end points were, and then the shortest path node drew a weighted path towards each end. The points farthest away from the anchors will end up with the highest weights. That just so happens to be the points in the middle of two anchors. And then the points gradually decline in weight going from the middle to the anchor. That's why it's important to make the subdivision odd. Because a cut at exactly the halfway mark will make sure the weights from middle to end are perfectly even.

Finally, using the gradual weights, you can do the wacky angular math to make a curve.
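
For anyone who wants the proper mathematical droop instead of my approximation, a Python sketch of a catenary (made-up helper: solve the parameter by bisection, then sample vertical offsets; span is the anchor-to-anchor distance, droop the midpoint sag):

import math

def catenary_offsets(span, droop, samples):
    # solve a in: droop = a * (cosh(span / (2a)) - 1); sag shrinks as a grows
    lo, hi = 1e-3, 1e6
    for _ in range(100):
        a = (lo + hi) / 2
        if a * (math.cosh(span / (2 * a)) - 1) > droop:
            lo = a
        else:
            hi = a
    half = span / 2
    anchor_y = a * math.cosh(half / a)
    # vertical offsets: 0 at both anchors, -droop at the midpoint
    return [a * math.cosh((span * i / (samples - 1) - half) / a) - anchor_y
            for i in range(samples)]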
>>
>>996565
Ahhh, I think I get what you're saying. At least, the way you've explained it makes sense to me. I doubt that I could have figured that out on my own though. Up until now I've mostly been fucking around with general stuff like set position and the like, shortest path wasn't even on my radar, let alone knowing where it would even apply.
As for the subdivision, I had it turned up, but I turned it down since it didn't seem like it was doing any good, but I didn't quite want to get rid of it just in case I could figure something out with it.
Thanks again broski.
>>
>>996566
Yeah, the noding experience is like that. Where you didn't know a function exists, until you need it, and then after some trial and error, you find it was there all along. I still don't know what half of the curve nodes do.

Shortest path is a powerful node once you learn how it works. It's pretty much made for manipulating many mesh lines at once. Most tutorials will teach you how to make fractals with it. But I've no interest in fractals. You might benefit from learning fractals though, since you're doing an outdoor scene. Might be able to create nicely randomized trees by manipulating the values of fractals randomly.

Maybe this tutorial will help. (I don't know)
https://www.youtube.com/watch?app=desktop&v=nXCSMO4iioA
>>
File: Animate along curve.jpg (128 KB, 1260x570)
>>996570
>Where you didn't know a function exists, until you need it, and then after some trial and error, you find it was there all along.
Sounds like me and the map range node. For years I had a node group I made called "color value" that took a mask factor and mixed it between two numerical values. I used it all the time. Then I find out later on accident that the Map range node both exists, and does the same thing but better (the different interpolation methods were a fuckin godsend).
Needless to say, map range is probably among my favorite nodes by now.

Still, thanks for the vid. I'll give it a go. I mainly use an external tree program for foliage (Speed Tree), but any info is better than none, and there's things that fractals are useful for.

I feel bad for asking, as I feel like I've asked a bit too much, but is there a way to animate the poles along the length of the curve? I've given it a few attempts from solutions I've done in the past (pic related is something I've referenced before for things), but can't seem to figure out how to do it with this one. I've had "some" success in some of the attempts, and I can get the poles to move along the curve, but it either fucks up the cables or does some weird shit where it wraps the cables back to the start.
I'm trying to get them to loop.
>>
File: Ark Shadow Test 04.png (327 KB, 827x558)
>>996278
>>996257
>>996246
Did the grid method. Didn't follow the video. Just did my own jank version. I already had a node group that scales points in the fashion of a camera. So it wasn't hard to get the rays to shoot in the correct direction. Though my camera scaling node doesn't fit mathematically perfectly to the actual camera, I can still just manually type in a number that gets it as close as I can eyeball it.

Anyway the important part is that I can make the cast shadow very easily when I can shoot the rays in the correct direction. It's actually so easy once you grasp the concept. But the grid slows the animation down by a lot. This isn't feasible for real time viewport activity. And it's tied to the camera, because geometry nodes doesn't have any concept of the viewport view's location.

What I need to make this work fast, is the ability to raycast in shader nodes. Not in geo nodes. Then I wouldn't have to set up this stupid slow grid.
>>
>>996782
>because geometry nodes doesn't have any concept of the viewport view's location
I think you could hack this by attaching some object to the camera that you then use as a geonode input. But yeah, this is gonna be super fucking slow.
>>
>>996784
>attaching some object to the camera that you then use as a geonode input
Why even do that?
There's an "Active Camera" node. It automatically knows which camera you're using. Just grab that, plug it into an object info node and you're good.
>>
>>996784
>>996799
I don't think you guys understand. Knowing the viewport view's location and knowing the camera's position are 2 entirely different things. I don't want to have to go to the active camera every time I need to check if the shading is correct. And have to manipulate the camera to check different angles. It's bothersome.
I want to see the effects of my work directly from any angle instantaneously in the standard viewport view.

Which would be totally possible, if one could raycast pixels in such a way that detects hits on positional geometry. That, as of now, seems impossible. Which sucks. I've been trying to think of different ways to detect "hits" in shading, but no good ideas come to mind. The texture coordinate node has a "camera" output. Connecting a length math node to that gives you a per-pixel distance between the viewport and the geometry. That's a start. Somehow, taking the camera data and the location of the empty, I can get a per pixel distance between the geometry and the empty. This is great. You add those together, and you have the bouncing distance from camera to geometry to light.

Now what I need, is to get the truncated distance from the camera data to the empty that results from hitting the geometry. Just that 1 thing, and cast shadows are cracked wide open. But how to get that without a raycast function? Perhaps there's a manipulation using depth? Maybe detect which pixels are on the same level of depth in relation to the empty, as viewed from the camera, and then assign the nearest pixels white, leaving anything not the nearest black. I don't know. Thinking about this makes me delirious.
>>
>>996809
Desu I'm not sure how you do this without getting your hands dirty with some of the rendering backend. Either by reimplementing your geonode setup for the viewport, or by creating an arbitrary raycast shader node, or by fucking with whatever the eevee raycasting feature on 4.2 does.
>>
>>996813
Yeah, that's what I'm thinking. I'll have to do something on the backend. Which I have absolutely zero experience with. I can't code. No python experience whatsoever. And 4.2's performance is unusable for me. (at least, the last time I checked a month or two ago)

I'm frustratingly close to something amazing, but out of options. Feels blueballing as fuck.
>>
>>993075
I've never been able to get geometry nodes to work outside of blender. Even when texture baking, it never bakes how it looks in the nodes. Why is this?
>>
File: Ark Eye Test 01.webm (1.22 MB, 1152x914)
I hope you guys don't mind me using this thread for shader stuff. I'll go to another thread if it's a problem. But while this is primarily shader node work, I do use geometry nodes to store a bunch of data. Two empties are set up as eyeballs. And the empties track a third empty that's controlled by the armature. The geometry node captures the positions and rotation of the empties.

I'm working on completely procedural anime eyes. I did something like this before, but the previous method required making UV maps for each eye. This time, no UV mapping is required. It's almost as if I cut a hole into the eyeballs, except with shaders. And use that hole as a mask.
I figured out how to scale the hole to be bigger or smaller. How to change the shape from circular to oval. And how to rotate it, so it tracks along with the character's pose. And then by creating masks within masks, I can color in the eye with a pupil and highlight.

Because it's all made from the ground up, I essentially had to recreate the mapping functions that the mapping node would normally do. I'm pretty proud of this. It's pretty tough to work out how to do this. But so far it appears to render faster than the previous method. Requires fewer empties than the previous method. Requires less geometry node math than the previous method. And isn't reliant on UV unwrapping to stay even. So far, so good.

I still need to figure out how to restrict the eyes, so when they're tracking to the far sides, they don't roll all the way to whites.
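
The core of the mask trick, written out as a sketch (iris_mask is a made-up name; assumes mathutils Vectors, with the eye's forward axis taken from the empty's rotation). Shrinking the angle gives the nested pupil/highlight masks:

import math

def iris_mask(point, eye_center, eye_forward, iris_angle_deg):
    # 1 inside the "hole", 0 outside; the hole tracks wherever forward points
    d = (point - eye_center).normalized()
    inside = d.dot(eye_forward.normalized()) > math.cos(math.radians(iris_angle_deg))
    return 1.0 if inside else 0.0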
>>
File: roller coaster.jpg (325 KB, 2063x908)
How do I get an array of a specific number of objects to follow a curve? I'm trying to make a roller coaster train.
I want an array of like 6 objects that follow a curve along the length.
I can't find anything about it. Every fucking video/question just resamples the curve to a specific number of equidistant points to distribute things along the entire length. It's like, yeah I KNOW how to do that, it's fuckin simple. The fact that I can't seem to find what I want probably means it's not so simple... or I'm retarded. Probably both.
I tried doing pic related, but it doesn't work. The first car follows the curve and rotation, but the rest are off the curve and copy the rotation of the first one.

I'd use the regular ol array modifier + curve modifier, but I don't want/need them to deform, and they're collections which kind of puts that out of the question.
>>
>>997207
>Every fucking video/question just resamples the curve to a specific number of equidistant points to distribute things along the entire length.
Sounds exactly like what resample curve does?
>>
File: Coaster Test.webm (979 KB, 1562x872)
>>997203
For the matter of only 1 car following the curve: it's most likely because you didn't tick the "separate children" box in the collection node. And remember to apply any transformations you've made to your curve. Toggle between "original/relative" too. You might have to flip between one or the other depending on where object origins are, and all that jazz.

Anyway, I attempted a solution. I'm a noob at this, so take it with a grain of salt. But basically, the accumulate field node allows you to sort of "stack" things, by adding up a value per instance so the cars stack in front of each other. And then I added a value to the entire stack in order to move the whole stack along the length of the curve.

The modulo node ensures that no matter how high the value of the stacked instances get, they will stop at the length of the curve and loop back to zero. Effectively creating a loop.

If your cars are different sizes, then you'll likely have to do some extra bullshit to space them correctly. My solution assumes they're all the same size.

I encountered a weird behavior where the curve was twisting when it looped. Don't know why. But I consulted with this tutorial, and it fixed the problem. https://www.youtube.com/watch?v=IQdT5kACMQc I couldn't find a solution on the exchange or the artist community forum. But that's why I used two align euler to vector nodes. Some mathematical magic eliminated the twisting artifact.

File: https://files.catbox.moe/iy7lhj.blend
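
The spacing logic from that setup, boiled down (made-up names; the results are the 0-1 factors you'd feed the Sample Curve node):

def car_factors(n, car_length, curve_length, t):
    # accumulate: car i is offset i * car_length along the curve from the first car
    # modulo: wrap back to the start when a car runs off the end of the curve
    return [((t + i * car_length) % curve_length) / curve_length
            for i in range(n)]

car_factors(6, 2.0, 60.0, 59.0)  # [0.983, 0.016, 0.05, ...] — cars wrap past the end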
>>
>>997220
Ahhhh. I always forget about the accumulate field node.
So yeah, this kinda works, but not quite the way I had in mind. The collection I'm instancing is a single car (multiple separate objects that make up the car), not a collection of multiple cars. Re-reading my original question, I guess I missed that bit. So that's on me.
Is there something simple I can do to instance that car collection to make up the train? That's been the biggest hurdle.
>>
>>997220
>>997283
Also, how is it that the car is instanced without actually using an instance on points node? I thought you HAD to do that to have something like that show up.
>>
File: Coaster Test 2.webm (508 KB, 1556x860)
>>997283
>>997284
You can set instances anywhere you want using the set position node. Same as you would a mesh. You don't have to use points to put them places. What I did, was sample a position along the length of the curve using the Sample Curve node. The node scans along the length of the curve, and figures out what the XYZ coordinates are. And then you just tell the instance to go to that position using the Translate Instances node.

An instance uses the object's origin point to place itself. So if you want to use the same instance multiple times, then yeah, you would create multiple points, and then put the instance on the points. So that's what I'm going to do. I'll place points onto the curve in a similar way that I put the cars on before. Changing the accumulate node from instances to points. And using the basic points node to count up the points. You said you wanted 6 cars earlier, so I set it to 6. It appears the rotation math still works, thank god.

And there. A single car instanced 6 times.
I barely changed anything, but here's the file anyway: https://files.catbox.moe/pu7x3k.blend


