# Why are texture coordinates often called UVs?

Computer Graphics Asked by samanthaj on August 27, 2021

Is there some historical reason texture coordinates are often called UVs? I get that vertex positions are x, y, z, but even OpenGL has `TEXTURE_WRAP_S` and `TEXTURE_WRAP_T`, and GLSL has swizzle aliases so that if a texture coordinate is in a vec you can access it with

`someVec.st`

but not

`someVec.uv` (these would be the 3rd and 4th components of the vector)

And yet pretty much every modeling package calls them UVs: Maya, Blender, Unity, Unreal, 3ds Max.

Where does the term UVs come from? Is this a known part of computer graphics history or is the reason they are called UVs lost in pixels of cg time?

In math, geometry, and physics it is common practice to use the coordinates $$(u,v)$$ to represent an arbitrary parameterisation, including that of a surface in 3D Euclidean space. Since the coordinates of the parameterisation might be arbitrary (each could be an angle, a function of the Euclidean coordinates $$(x,y,z)$$, or something else), it is helpful to distinguish them from the coordinates used to represent the wider Euclidean space in which the surface exists.
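As a concrete illustration (my example, not part of the original answer): a unit sphere sitting in $$(x,y,z)$$-space can be described by just two parameters $$(u,v)$$, e.g.

$$\mathbf{r}(u,v) = (\cos u \sin v,\ \sin u \sin v,\ \cos v), \qquad 0 \le u < 2\pi,\ 0 \le v \le \pi$$

Here $$u$$ and $$v$$ are angles, not spatial positions, which is exactly why a separate pair of letters is useful.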

The $$(u,v)$$-notation caught on in computer graphics for the same reason: it clarifies that the coordinates used to index into your texture do not necessarily align with the world-space (or view-space) coordinates $$(x,y,z)$$, but are instead an index into a parameterisation of a surface in that space.

Answered by Jessica Hansen on August 27, 2021

This is not a definitive answer, but it is generally accepted that Ed Catmull introduced Texture Mapping in his 1974 thesis, "A SUBDIVISION ALGORITHM FOR COMPUTER DISPLAY OF CURVED SURFACES"

In that thesis, he uses (U,V) to access the image data (see the page labeled 36):

MAPPING
Photographs, drawings, or any picture can be mapped onto bivariate patches. This is one of the most interesting consequences of the patch splitting algorithm. It gives a method for putting texture, drawings, or photographs onto surfaces....

...If a photograph is scanned in at a resolution of x times y then every element can be referenced by u·x and v·y where 0<=u,v<=1. In general, one could think of the intensity as a function I(u,v) where I references a picture.
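Catmull's u·x, v·y indexing can be sketched in modern terms as a nearest-neighbour texture lookup (a minimal illustration, not his code; the function and variable names are my own):

```python
def sample_nearest(image, u, v):
    """Look up a texel from `image` (a list of rows) using normalized
    coordinates 0 <= u, v <= 1, in the spirit of Catmull's u*x, v*y indexing."""
    height = len(image)
    width = len(image[0])
    # Scale the normalized coordinates to integer pixel indices, clamping so
    # that u == 1.0 or v == 1.0 still lands on the last texel.
    col = min(int(u * width), width - 1)
    row = min(int(v * height), height - 1)
    return image[row][col]

# A 2x2 "texture": intensities laid out row by row.
tex = [[0, 1],
       [2, 3]]
print(sample_nearest(tex, 0.0, 0.0))  # -> 0
print(sample_nearest(tex, 1.0, 1.0))  # -> 3
```

The key point is that (u, v) is resolution-independent: the same normalized coordinates address the same relative spot in the picture regardless of how many pixels it was scanned at.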

I believe this thesis also introduced the concept of the Z-Buffer (Page 32)

Answered by Simon F on August 27, 2021
