
How are textures projected onto 3D models in texture painting applications?

Computer Graphics Asked by Lenny White on August 27, 2021

In most modeling software you can texture paint onto 3D models using so-called stencil textures. These basically project a texture from the viewport view onto the model, as seen for instance in the image below from Blender.

What is the general technique used to project textures like this onto 3d models?

[Images: stencil texture projection painting in Blender]

One Answer

I don't know if there is a smarter/more efficient way to do it, but a possible approach is the following:

Step 1 - Projection volume

First, you calculate the "projection volume" whose near-plane corners coincide with the texture's corners on your screen. You can do this by transforming the screen coordinates of the texture's corners into the range [-1, 1]. Then you use those coordinates to build the volume in NDC by pairing the transformed x-y coordinates with z = -1 (near plane) and z = 1 (far plane).
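As an illustration, here is a minimal NumPy sketch of this step. It assumes OpenGL-style NDC (x, y, z all in [-1, 1], y pointing up) and screen pixels with the origin at the top-left; all function and parameter names are illustrative, not taken from any particular application.

```python
import numpy as np

def projection_volume_ndc(corners_px, screen_w, screen_h):
    """Build the 8 corners of the 'projection volume' in NDC.

    corners_px: four (x, y) corner positions of the stencil texture in
                screen pixels (origin top-left, y pointing down).
    Returns an (8, 3) array: 4 near-plane corners (z = -1) followed by
    4 far-plane corners (z = +1).
    """
    corners_px = np.asarray(corners_px, dtype=float)
    # Map pixel coordinates into [-1, 1]; flip y because screen y points down.
    x_ndc = corners_px[:, 0] / screen_w * 2.0 - 1.0
    y_ndc = 1.0 - corners_px[:, 1] / screen_h * 2.0
    near = np.column_stack([x_ndc, y_ndc, np.full(4, -1.0)])
    far = np.column_stack([x_ndc, y_ndc, np.full(4, 1.0)])
    return np.vstack([near, far])

# Example: a 256x256 stencil placed at pixel (100, 50) on a 1920x1080 screen.
volume = projection_volume_ndc(
    [(100, 50), (356, 50), (356, 306), (100, 306)], 1920, 1080)
```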

Step 2 - Collision test

Now you need to have both your model and your "projection volume" in the same space. You can either transform the "projection volume" into your model space or your model into NDC. I think the second choice makes the next step a little bit easier.

Now you have to find all triangles that are fully or partially inside the "projection volume". This is basically a collision detection check (triangle vs. box or frustum), and I won't go into the details here. For all triangles that are fully inside the volume, you now have a connection between each vertex and its position inside the texture you want to project: simply transform the vertex position into screen space and from there into the texture's pixel coordinates. The second transformation depends on where you positioned your texture on the screen and how you have scaled it.
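To make that vertex-to-texture mapping concrete, here is a small sketch of the two transforms just mentioned: model space to screen space via the model-view-projection matrix, and screen space into the stencil texture's coordinates. The parameters describing where the stencil sits on screen (tex_origin_px, tex_size_px) are assumptions about how the application stores its placement.

```python
import numpy as np

def vertex_to_stencil_coords(v_model, mvp, screen_w, screen_h,
                             tex_origin_px, tex_size_px):
    """Map a model-space vertex to normalized coordinates inside the stencil.

    v_model:       (3,) vertex position in model space
    mvp:           4x4 model-view-projection matrix (column-vector convention)
    tex_origin_px: top-left corner of the stencil texture on screen, in pixels
    tex_size_px:   (width, height) of the stencil texture on screen, in pixels
    """
    v = mvp @ np.append(v_model, 1.0)
    ndc = v[:3] / v[3]                      # perspective divide -> NDC
    sx = (ndc[0] + 1.0) * 0.5 * screen_w    # NDC -> screen pixels
    sy = (1.0 - ndc[1]) * 0.5 * screen_h    # flip y: screen y points down
    u = (sx - tex_origin_px[0]) / tex_size_px[0]
    v_tex = (sy - tex_origin_px[1]) / tex_size_px[1]
    return u, v_tex                          # in [0, 1] if the vertex is covered
```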

For all partially covered triangles, the basic procedure is the same, but you have to introduce extra "helper vertices" and transform them as well in order to separate the covered from the uncovered areas. This doesn't mean that you really have to split your triangles into multiple smaller ones, but you at least need the extra vertices as intermediate data for the final step.
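One way to obtain those helper vertices is to clip each partially covered triangle against the stencil rectangle in NDC, for example with Sutherland-Hodgman polygon clipping. The 2D sketch below only illustrates the idea; depth is ignored, and a real implementation would carry z and other attributes along with the intersection points.

```python
import numpy as np

def clip_triangle_to_rect(tri_ndc, x_min, x_max, y_min, y_max):
    """Clip a triangle (NDC x/y coordinates) against the stencil rectangle.

    Returns the clipped polygon: the original vertices that lie inside the
    rectangle plus the 'helper vertices' created on its edges.
    """
    def clip_edge(poly, inside, intersect):
        out = []
        for i in range(len(poly)):
            cur, nxt = poly[i], poly[(i + 1) % len(poly)]
            if inside(cur):
                out.append(cur)
                if not inside(nxt):
                    out.append(intersect(cur, nxt))
            elif inside(nxt):
                out.append(intersect(cur, nxt))
        return out

    def lerp(a, b, t):
        return a + (b - a) * t

    poly = [np.asarray(p, dtype=float) for p in tri_ndc]
    # Clip against the four sides of the stencil rectangle, one after another.
    poly = clip_edge(poly, lambda p: p[0] >= x_min,
                     lambda a, b: lerp(a, b, (x_min - a[0]) / (b[0] - a[0])))
    poly = clip_edge(poly, lambda p: p[0] <= x_max,
                     lambda a, b: lerp(a, b, (x_max - a[0]) / (b[0] - a[0])))
    poly = clip_edge(poly, lambda p: p[1] >= y_min,
                     lambda a, b: lerp(a, b, (y_min - a[1]) / (b[1] - a[1])))
    poly = clip_edge(poly, lambda p: p[1] <= y_max,
                     lambda a, b: lerp(a, b, (y_max - a[1]) / (b[1] - a[1])))
    return poly
```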

Step 3 - Adjust model data

In the last step, you have to bake this information somehow into your model. This totally depends on your model's data structure and layout. You can for example just split partially covered triangles into new ones and then set the texCoord of every vertex that is inside the "projection volume" accordingly.

If you have some kind of model-specific UV-mapped texture, you would now use the information from the previous step to interpolate and copy the data of your projected texture into your model texture. For each vertex, you know where it is located inside the projected texture and inside its model's UV texture. Now you basically cut out each triangle from your projected texture and transform it into the corresponding triangle in your UV texture. Notice that this is not trivial, since the triangle usually gets distorted and scaled. However, you can use the graphics pipeline for that by rendering directly into your UV texture.
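As a rough CPU-side illustration of that baking step, the sketch below rasterizes one covered triangle into the model's UV texture and copies the corresponding stencil texels via barycentric interpolation. A real implementation would do this on the GPU by rendering into the UV texture and would filter instead of using a nearest-neighbor lookup; the array layouts here are assumptions.

```python
import numpy as np

def bake_triangle(model_tex, stencil_tex, uv_model, uv_stencil):
    """Copy one covered triangle from the projected (stencil) texture into
    the model's UV-mapped texture (modified in place).

    model_tex:   (H, W, C) array, the model's UV texture
    stencil_tex: (h, w, C) array, the projected texture
    uv_model:    (3, 2) UV coordinates of the triangle in the model texture
    uv_stencil:  (3, 2) normalized coordinates of the triangle in the stencil
    """
    uv_model = np.asarray(uv_model, dtype=float)
    uv_stencil = np.asarray(uv_stencil, dtype=float)
    H, W = model_tex.shape[:2]
    h, w = stencil_tex.shape[:2]
    a, b, c = uv_model * [W - 1, H - 1]       # triangle in model-texture pixels

    def edge(p, q, r):                        # signed area, used for barycentrics
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

    area = edge(a, b, c)
    if area == 0:                             # degenerate triangle
        return

    # Loop over the triangle's bounding box in the model texture.
    x0, y0 = np.floor(np.minimum(np.minimum(a, b), c)).astype(int)
    x1, y1 = np.ceil(np.maximum(np.maximum(a, b), c)).astype(int)
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            p = (x + 0.5, y + 0.5)
            w0, w1, w2 = edge(b, c, p), edge(c, a, p), edge(a, b, p)
            if (w0 >= 0) == (w1 >= 0) == (w2 >= 0):    # pixel inside the triangle
                bary = np.array([w0, w1, w2]) / area
                s = bary @ uv_stencil                   # position in the stencil
                sx = int(round(s[0] * (w - 1)))
                sy = int(round(s[1] * (h - 1)))
                model_tex[y, x] = stencil_tex[sy, sx]   # nearest-neighbor copy
```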

There are probably a lot of other data representations that might require a different approach for this final step. However, the key here is the second step which establishes the connection between your vertices and the texture. Once you have this, it shouldn't be too hard to come up with a method to add the projected texture to your model.

Further information

During the second step, you might also need to perform some kind of depth testing, so that only the closest surface gets affected and not every surface that happens to lie inside the projection volume.
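A simple way to do that is to render the scene's depth from the current viewport first and then compare every candidate point against the stored value. A minimal sketch, again assuming NDC depth in [-1, 1] and an illustrative depth-buffer layout:

```python
import numpy as np

def passes_depth_test(ndc, depth_buffer, screen_w, screen_h, bias=1e-4):
    """Return True if a point (given in NDC) is the closest surface at its pixel.

    depth_buffer: (H, W) array of NDC depth values rendered from the current
                  viewport. The small bias avoids rejecting the surface that
                  the stored depth value was taken from (self-occlusion).
    """
    x = int((ndc[0] + 1.0) * 0.5 * (screen_w - 1))
    y = int((1.0 - ndc[1]) * 0.5 * (screen_h - 1))
    return ndc[2] <= depth_buffer[y, x] + bias
```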

Correct answer by wychmaster on August 27, 2021
