Correct way of setting up mesh collision on generated mesh?

Hi, I’m generating a cylinder and would like to use the mesh itself as the collision. I plan to generate it with different segment counts (3 to 32), so at 3 it will look more like an extruded triangle than a cylinder.
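For context, the shape in question is just a ring of segment vertices extruded along an axis; with `segments = 3` the ring degenerates into a triangle, which is why the collision should match the actual mesh. A minimal standalone sketch (not plugin API, names are illustrative):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Build the two end rings of an N-segment cylinder of given radius/height.
// With segments == 3 this produces a triangular prism rather than a round
// cylinder, which is exactly the shape the collision should reflect.
std::vector<Vec3> cylinderRingVertices(int segments, double radius, double height) {
    const double kPi = 3.14159265358979323846;
    std::vector<Vec3> verts;
    verts.reserve(static_cast<size_t>(segments) * 2);
    for (int i = 0; i < segments; ++i) {
        double angle = 2.0 * kPi * i / segments;
        double x = radius * std::cos(angle);
        double y = radius * std::sin(angle);
        verts.push_back({x, y, 0.0});    // bottom ring
        verts.push_back({x, y, height}); // top ring
    }
    return verts;
}
```

These same vertices could also serve directly as a convex hull for physics, since a prism/cylinder is already convex.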

I’ve seen that the example adds a box collider, though I want the simulation to take into account the actual shape of the “cylinder” rather than just using a capsule collider.

Relatedly, I would like to be able to turn on a physics simulation so it could fall away after it has taken enough damage.

2 Likes

At the moment collision meshes aren’t supported, but they are definitely something that could be added (assuming they can be created at runtime). So for now this will have to be considered a feature request. You might be able to script something to take the generated proc-mesh and turn it into a collision mesh; there is a ‘has finished building’ event you can hook.

1 Like

Setting up collision for generated meshes is definitely something that should be on the wishlist.

Unreal has a “Use Complex Collision as Simple” setting that can be used when a mesh has no collision set up. I wonder if there is a pathway for the placed meshes to be set to use that setting.

If not, placing assets from the Resource List is probably the best way forward (so actors and blueprints can be set up individually with their proper behaviours, physics, etc).

I am using the Geometry for quick prototypes (with no collision), because it’s easy to wire

Frame > FCube > Out

rather than Resolving, adding references to the Resource Files, etc.

I’ve pondered this before, and wondered about the best mechanism for tagging generated meshes for collision. In general, some operator to ‘tag’ a mesh as belonging to some collision channel is fine, but this raises some questions:

  1. Could this just be like assigning a material: Assign a “collision channel” resource to your mesh and it gets used for collision instead of rendering?
  2. Would you want to do this as well as assigning an actual material, so that you could generate one mesh and use it as both visual and collision?
  3. If you couldn’t do both (i.e. the “collision material” idea), then for cases where they matched you’d need to generate the mesh twice; is this bad? (You would, however, gain the ability to generate collision that differs from the visuals, e.g. simpler collision.)

So, to piggyback the material system (already done), or to have a separate ‘channel’ (more work/vertex data)?

There’s a lot to unpack on those questions. Let’s go.

Static Meshes in Unreal have the Collision Data saved into them and it’s processed at import time.
If an FBX-imported Static Mesh doesn’t have a user-authored collision mesh prefixed with UBX_, UCP_, USP_, or UCX_ (Box, Capsule, Sphere, Convex, respectively) and sharing the name of the visual mesh (e.g. Cube_001 & UBX_Cube_001), collision can be generated automatically by Unreal at import time based on the complex static mesh.

This will generate a simplified convex hull.
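The naming convention described above can be made concrete with a tiny sketch (plain C++, not engine code):

```cpp
#include <array>
#include <string>

// The FBX import convention pairs each visual mesh with a collision mesh
// sharing its name plus a shape prefix:
//   UBX_ = box, UCP_ = capsule, USP_ = sphere, UCX_ = convex.
std::array<std::string, 4> collisionNamesFor(const std::string& visualMeshName) {
    return { "UBX_" + visualMeshName, "UCP_" + visualMeshName,
             "USP_" + visualMeshName, "UCX_" + visualMeshName };
}
```

So a visual mesh named `Cube_001` would be matched by any of `UBX_Cube_001`, `UCP_Cube_001`, `USP_Cube_001`, or `UCX_Cube_001` in the same FBX.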

More detailed info here:

This usually doesn’t give enough detail for most situations, so you can either generate your own collision mesh in your DCC or use the more expensive option, which is setting the collision complexity to “Use Complex Collision as Simple”:

[image]

This will certainly be very detailed and will work well for projectiles and finer simulation, but it’ll be more expensive.

I suppose it would be good if the generated-mesh operators (like FCube) could have an option to add one of the collision types (Box, Sphere, Capsule or Convex) to the actor, based on the entity’s bounding box and the type of generated mesh. Convex would be used for the more complex primitives (such as cylinders).
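The idea of picking a primitive from the generated mesh’s bounds could look something like this rough standalone sketch (the heuristic thresholds are entirely illustrative, not plugin behaviour):

```cpp
#include <algorithm>
#include <string>
#include <vector>

struct Vec3 { double x, y, z; };

// Choose a simple collision primitive for a generated mesh based on its
// bounding box: near-cubic bounds get a box, elongated bounds a capsule,
// anything else falls back to a convex hull of the mesh itself.
std::string chooseCollisionPrimitive(const std::vector<Vec3>& verts) {
    if (verts.empty()) return "None";
    Vec3 mn = verts[0], mx = verts[0];
    for (const Vec3& v : verts) {
        mn.x = std::min(mn.x, v.x); mx.x = std::max(mx.x, v.x);
        mn.y = std::min(mn.y, v.y); mx.y = std::max(mx.y, v.y);
        mn.z = std::min(mn.z, v.z); mx.z = std::max(mx.z, v.z);
    }
    double sx = mx.x - mn.x, sy = mx.y - mn.y, sz = mx.z - mn.z;
    double longest  = std::max({sx, sy, sz});
    double shortest = std::min({sx, sy, sz});
    if (longest <= shortest * 1.1) return "Box";      // roughly cubic
    if (longest >= shortest * 3.0) return "Capsule";  // long and thin
    return "Convex";                                  // e.g. a cylinder/prism
}
```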

However, when procedurally generating complex meshes (like trees, for instance) it would be nice to have an Object Collision Operator that could replace the convex object with a generated one. That way you could procedurally generate simpler collision for your mesh following your own ruleset.

About Materials

I find the assignment of materials using the vertex color a little strange, because you end up applying it to the whole object. Unless I’m missing something, you can’t paint individual vertices, which is the point of vertex colors.

Vertex colors are normally used to blend between materials, or to spawn meshes, foliage or actors on the surface of other actors based on the vertex weight, among all sorts of other uses.

I also found it strange that you could “paint” a generated mesh, but not assign a material to a placed one.

What would make more sense (in my head at least) would be to assign a Material to any object (not just generated meshes), so the Resolve node would work exactly the same,

but the Material node would take the ID input and push the resolved Material to a Material Slot:

[image]

Objects can have any number of Material Slots (I don’t know what the limit is); most meshes only use slot 0.

So to answer your questions.
1 - Assigning a collision channel instead of rendering would be a bit weird (generated meshes should have the option to have collision or not).
2 - Having material or collision, but not both, would be very weird; as a user, “you can either paint or collide” doesn’t sound right. You need both.
3 - I don’t think generating a mesh twice is necessarily bad. I don’t know about the runtime performance implications, but it could be as simple as taking the generated object output and passing it into an Add Collision node, which would work really well for offline-generated content.

Thank you for the comprehensive response. So…
Procedurally, you can (currently):

  1. Generate a multi-mesh (supports multiple materials arbitrarily across generated triangles).
  2. Place static meshes (and blueprinted actors).
  3. Instance components (experimental, currently being improved as we speak).

You can use 2 to place meshes that may already have collision, and you can use 3 to add collision primitives (box/sphere/etc), but 1 doesn’t do anything other than visual geometry at the moment. The question is: how to support generation of ‘complex collision’ meshes for an actor?

The reason I talked about using materials to flag triangles as contributing to a collision mesh is:

  1. This is a system already in place that could easily be extended to flag collision geometry.
  2. Your generated geometry is likely to be a mix of surfaces you do and don’t want used as collision (e.g. fine detail would stay as visual only) and so within one mesh you need to flag triangles as collision or not (and potentially assigned to particular channels).

Now, I could add new operators that pass geometry through and mark it as collision (and assign channels), but this is almost equivalent to the material assignment. So I was asking whether needing to assign both at once is useful or problematic.
I could also have some way to flag extra content outputs to your top-level procedures for collision, but passing around multiple ‘output content’ data paths (visual vs collision) just adds to the complexity of your procedures.

So, I’m just exploring the ways this could be done that work well with the current systems and don’t require adding more complexity/cases that need special handling. If ultimately it’s better to do so, then sure, but I’m not sure there is that much incentive to.

This also all depends on what Unreal technically needs and can use for collision. The UProceduralMeshComponent does have some collision support that I need to study. It may be that arbitrary meshes aren’t great, as the physics solver tends to only support convex hulls (but multiple combined hulls are OK, I think). With the mesh component it looks like you can just give it a mesh and it will build collision in the background if the performance hit is acceptable, so that’s an option too.
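For reference, UProceduralMeshComponent does expose both routes discussed here. A minimal engine-side sketch (this requires an Unreal project with the ProceduralMeshComponent module, and is not Apparance code):

```cpp
// Engine-dependent sketch: requires the ProceduralMeshComponent module.
#include "ProceduralMeshComponent.h"

void SetupProcMeshCollision(UProceduralMeshComponent* ProcMesh,
                            const TArray<FVector>& Vertices,
                            const TArray<int32>& Triangles,
                            const TArray<FVector>& ConvexHullVerts)
{
    // Cook collision off the game thread to avoid hitches on rebuild.
    ProcMesh->bUseAsyncCooking = true;

    // Route A: complex (per-triangle) collision from the visual mesh itself.
    ProcMesh->bUseComplexAsSimpleCollision = true;
    ProcMesh->CreateMeshSection_LinearColor(
        0, Vertices, Triangles,
        TArray<FVector>(), TArray<FVector2D>(),
        TArray<FLinearColor>(), TArray<FProcMeshTangent>(),
        /*bCreateCollision=*/ true);

    // Route B: one or more convex hulls. Note that complex-as-simple
    // collision cannot simulate physics, so for the "fall away when
    // damaged" case the convex-hull route is the one that works:
    // ProcMesh->bUseComplexAsSimpleCollision = false;
    // ProcMesh->AddCollisionConvexMesh(ConvexHullVerts);
}
```

One relevant constraint: a mesh using complex-as-simple collision can block things but can’t itself simulate physics, which matters for the falling-debris use case from the original question.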

I set up a simple collision test, and to get it to work I had to set the Box Extents of the collision component to 1 in my Resource List

and then halve the size of the Frame before placing it, to have it match the generated cube:

[image]

Forgot to actually ask the question.

1 - I’m not sure if that’s how you are supposed to create collision (from components).
2 - I don’t quite get why I had to halve the scale of the Frame before placing the collision.
3 - I’m also not clear on how the Parameter Map works to change parameters when the object is spawned.

The example project has this set up for box collision (all the stage pieces).

  1. Component resources are added to the Actor as part of the procedural generation. The Frame is an implied parameter 0 and you can pass in extra parameters as a list with the Parameters input of the Place operator.
  2. Scaling is needed because the ‘extent’ part of the box is from centre to edge (like radius vs diameter).
    2b. You can set a scale factor in the parameter map when calling a function (see example scene)
  3. I’m rebuilding the Component setup UI as we speak and it will be much improved (clearer) soon. At the moment it’s best to have a look through the example scene and the resource lists in there. I use this for placing text renderers, collision boxes, and spotlights. It’s actually turning out to be really powerful.
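The extent scaling in point 2 boils down to one halving step; a trivial standalone illustration (the function name is made up for clarity):

```cpp
#include <array>

// A box 'extent' is measured from centre to face (half the full size),
// like radius vs diameter. So a generated cube whose frame is 100 units
// on a side needs a box-collision extent of 50 on each axis.
std::array<double, 3> boxExtentFromFrameSize(const std::array<double, 3>& fullSize) {
    return { fullSize[0] * 0.5, fullSize[1] * 0.5, fullSize[2] * 0.5 };
}
```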

Oh! For sure, I can see huge benefits there. I make components for all sorts of things on my project that can be added to Actors. I have components that make them interactive (such as pickups, interactive lights, etc.).

And thank you for the reminder about the examples; I keep forgetting to look at them to see how they’re done.

1 Like

[image: apparance-ball-and-peg]
So I made some changes to the ApparanceEntity source files to get this working: a couple of extra properties used to branch in SetGeometry. Both the peg and the ball are Apparance entities. The peg uses simple physics and the ball complex right now. I have to see what it does with a prism-shaped cylinder.

I added a virtual “NotifyGenerationComplete” too, whose base implementation calls the blueprint event, but it’s something my C++ could hook into easily enough. It gets called at all kinds of times because of that tick, and I’ve ended up not needing it after figuring out other things I could do in SetGeometry.

One other thing: I was getting some crashes when editing entities. ClearContent and DestroyContent can be called after the object has already gone out of scope, so they needed `if (m_pEntityRendering)` guards.
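The guard described is the usual null-check pattern against teardown-order calls; a minimal standalone illustration (the member name comes from the post, the surrounding types are hypothetical):

```cpp
#include <memory>

struct EntityRendering { void Clear() {} };

struct ApparanceEntityLike {
    std::unique_ptr<EntityRendering> m_pEntityRendering;

    // ClearContent can be invoked during editor teardown after the rendering
    // object is already gone, so guard every use of the pointer.
    bool ClearContent() {
        if (!m_pEntityRendering)  // guard against teardown-order calls
            return false;
        m_pEntityRendering->Clear();
        return true;
    }
};
```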

If you want a diff of the changes or something, let me know.

2 Likes

[image: apparance-ball-and-peg2]
With the peg set to 3 segments rather than 32, it does act a bit differently with only that change made :slight_smile:
(Ball dropped from same place but ends up on different square)

Ooh, hacking around a bit, sneaky! A quick-and-dirty “auto-build collision from mesh” flag on the Entity (advanced) might be a thing to add here for now.

The generation virtual is probably a good thing to add too.

Are you using live-code/hot-reload at all? I wonder if that would cause crashes if you are tinkering with objects that are thrashed and tracked by the plugin? Might need more info on that one.

Happy to have a look over your changes, no guarantees though :wink:

Ha! I’m just hacking about; you are welcome to take them or not. Just not sure how you want to share the code? A link to an unlisted GitHub gist on the forum? Something else?

Yes, I used that flag, controlled by a new boolean property on the entity, and another one to say whether to use simple collision or not. On complex it will add the “parts” as the verts for the convex physics mesh. Not sure how far that assumption will hold, but it does for these simple shapes anyway.

RE hot-reload: no. I was deleting and placing blueprints derived from Entity without making any native changes to hot-reload, at least not on the runs where I encountered the exception.

Maybe just email me a zip :slight_smile:

Ah, ok, I’ll try and repro when I’m testing the blueprint placement work I’m going to be doing next week. I’m sure I can thrash the code enough to cause it to break.

Actually, hmmm, not sure if the physics sim is stable enough to do a proper comparison.

1 Like