INTRODUCTION
In the last lesson, we drew the faces of a cube using some OpenGL functions, after having analyzed the rendering pipeline. Drawing polygons is the most critical step in a 3d engine because it absorbs almost all of the processing power! Hence, I must first explain this stage of the pipeline in greater depth. I then begin with some general theory on how to implement the texture mapping technique. This theory is then put into practice using the OpenGL high-level commands.
DRAWING POLYGONS
Once upon a time there was MS DOS... When graphic engines were developed for this operating system, a lot of assembly code was required to draw even a single point on the screen. In the Windows 95 era, most developers still took a low-level approach to their engines, since the hardware didn't allow fast applications to be created using graphics libraries. The first step was to initialize the graphic context using interrupt 0x10 of the BIOS for VGA and VESA modes. Then the developer created all the basic functions to draw points, lines and so on. We are very lucky. Powerful modern PCs allow us to work under Windows, Linux or Mac without any loss of speed. Moreover, our dear OpenGL saves us days or even months of work by allowing us to fully exploit all the features of our expensive 3d video card.
We learned how to draw polygons as points, lines, or filled with color using a single command (glPolygonMode) in the 2nd tutorial; a short reminder sketch follows the list below. I briefly explain the various methods here again:
- Draw polygon as points: the simplest and fastest approach; only the points corresponding to the polygon's vertices are drawn on screen.
- Draw polygon as lines: also called wireframe. The perimeter of the polygon is drawn by connecting all the vertices with segments. A fast algorithm (Bresenham) is used to draw every segment.
- Polygon filled with color: requires more resources than the previous methods because all the intermediate points must be drawn to fill the polygon. The polygon is filled scanline by scanline: starting from the upper vertex, the edges of the polygon are linearly interpolated to find the beginning and end of every horizontal line, which is then drawn.
- Polygon with texture mapping: texture mapping is a technique used to cover an object with an image. Each polygon of the object is assigned a small section of the image. The procedure to fill a polygon using texture mapping is very similar to the one used to fill it with color.
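As a quick reminder of the first three modes, this is roughly how they are selected in OpenGL (a hedged sketch; the actual setup was covered in the 2nd tutorial):

glPolygonMode(GL_FRONT_AND_BACK, GL_POINT);  // draw only the vertices
glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);   // wireframe: draw only the edges
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);   // filled polygons (the default)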
THE MAIN STEPS OF TEXTURE MAPPING
There are three main steps to add to our 3d engine to implement texture mapping:
- Load the texture in memory: the first thing to do is to create a function that is able to read an image file and save it in memory. We are going to use the Bitmap format since it is the most supported in the Windows environment and it doesn't include any compression algorithm which would only complicate matters right now.
- Assign a 2d point of the texture to every vertex: we must first add some fields to the structure obj_type at this point. The new variables will be used to match up each (3d) vertex with a (2d) point of the image, in order to cover the object as desired.
- Draw the polygons of the object "covering them" with sections of the texture: this phase (called the filling phase) is the most critical. We don't have to worry about coding this by hand since OpenGL already gives us some simple high level commands that make our life easy.
Well, enough of this chatter, let's get down to work...
LOADING AN IMAGE
First of all, let's include a new C/C++ file in our project and call it texture.cpp. This file will contain all the routines we need to manipulate images. Those of you not having much experience at this point may be afraid because we are using more than one file to create our project. Don't panic! We are actually simplifying the job! A 3d engine, just like any other program of a certain complexity, needs to have a modular structure. So, let's create the file and begin to write the function LoadBitmap:
int LoadBitmap(char *filename)
{
    unsigned char *l_texture;
    int i, j=0;
    FILE *l_file;
    BITMAPFILEHEADER fileheader;
    BITMAPINFOHEADER infoheader;
    RGBTRIPLE rgb;
- unsigned char *l_texture; A pointer to the zone of memory where we will insert the image. Every point of the image is represented by 4 values of unsigned char (with a range of 0-255), one for each color component.
- int i, j=0; Some variables useful for iteration in this routine
- FILE * l_file; A pointer to the Bitmap file opened with the fopen function.
The next few variables are very interesting. In fact, they allow us to easily read a Bitmap file since they are structures made for this specific purpose.
- BITMAPFILEHEADER fileheader; This is our file header! This structure contains information about the type and the size of the bmp file to load; in our routine its only job is to tell us how far to skip to reach the next header. The BITMAPINFOHEADER infoheader; structure gives us the really important information: the width and the height of the image. A simplified view of both structures follows below.
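For reference, here is an annotated excerpt of the fields that matter to us in those two Windows structures. This is only a simplified view; the authoritative declarations already live in windows.h, so don't paste this into your code:

/*  BITMAPFILEHEADER (excerpt)
      WORD  bfType;      -> "BM" signature of the file
      DWORD bfSize;      -> size of the whole .bmp file in bytes
      DWORD bfOffBits;   -> offset from the start of the file to the pixel data
    BITMAPINFOHEADER (excerpt)
      DWORD biSize;      -> size of this header
      LONG  biWidth;     -> image width in pixels   (we use this)
      LONG  biHeight;    -> image height in pixels  (we use this)
      WORD  biBitCount;  -> bits per pixel (24 for the uncompressed images used here)  */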
Next, the global variable num_texture, which represents the number of the loaded texture (useful for OpenGL to reference that texture), is incremented. Our function will return this value.
num_texture++;
Now we can open the file in read mode (if the file doesn't exist our function returns the value "-1"), read the fileheader and then fseek past it, so that the file pointer is positioned at the beginning of the next header.
if( (l_file = fopen(filename, "rb"))==NULL) return (-1);
fread(&fileheader, sizeof(fileheader), 1, l_file);
fseek(l_file, sizeof(fileheader), SEEK_SET);
Let's go there now and read the infoheader!
fread(&infoheader, sizeof(infoheader), 1, l_file);
l_texture = (unsigned char *) malloc(infoheader.biWidth * infoheader.biHeight * 4);
memset(l_texture, 0, infoheader.biWidth * infoheader.biHeight * 4);
- The fields biWidth and biHeight of the infoheader contain the width and the height of the image. These values are used to allocate exactly enough memory to store the texture: the size of an image is its width x its height x its color depth. Bitmap images have a color depth of 3 bytes, one byte for each color component (red, green or blue); this way of storing the image is called RGB. Since our texture is stored with 4 bytes per point (RGBA), a 256x256 image, for example, needs 256 x 256 x 4 = 262144 bytes.
- The malloc function assigns a zone of memory to the pointer variable l_texture.
- Since l_texture is now full of junk values, we use the function memset to clean that zone of memory, and fill l_texture with zeros.
Now... we have our zone of memory, ready and cleaned! There is only one thing left to do: fill it with the image! So let's write this little algorithm:
for (i=0; i < infoheader.biWidth*infoheader.biHeight; i++)
{
    fread(&rgb, sizeof(rgb), 1, l_file);
    l_texture[j+0] = rgb.rgbtRed;   // Red component
    l_texture[j+1] = rgb.rgbtGreen; // Green component
    l_texture[j+2] = rgb.rgbtBlue;  // Blue component
    l_texture[j+3] = 255;           // Alpha value
    j += 4;                         // Go to the next position
}
Many of you may already have an understanding of how an image is saved in a file. Basically, every point of the texture (referred to as a TEXEL from now on) is represented by 3 RGB values. The whole image is a vast series of these points placed side by side. When a row is complete, the next point begins at the left of the following row.
Using fread, our FOR loop first reads a single point of the image, that is, its three RGB values. The variable rgb used here is of type RGBTRIPLE, defined in windows.h (along with BITMAPFILEHEADER and BITMAPINFOHEADER), and is composed of 3 byte values (rgb.rgbtRed, rgb.rgbtGreen, rgb.rgbtBlue).
The next lines copy each RGB component into l_texture, advancing the index j by four at every iteration. Why four and not three? We read an image with a color depth of 3 and create a texture with a color depth of 4?! There is a fourth component set to 255... What is it? This is the Alpha component! The Alpha component doesn't interest us for now. It will be very useful however when I introduce the topic of Blending, where it can represent the transparency level of the texture. So stop complaining (otherwise I will insert another component! ;-P) and let's continue...
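If you ever need to reach a specific texel inside l_texture, the 4-bytes-per-texel layout makes the addressing simple. The small helper below is only an illustration of that layout; it is not part of the tutorial code:

// Hypothetical helper: pointer to the RGBA bytes of the texel at column x, row y.
// Rows are stored one after the other, 4 bytes (R,G,B,A) per texel.
unsigned char *GetTexel(unsigned char *texture, int width, int x, int y)
{
    return &texture[(y * width + x) * 4];
}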
fclose(l_file); // Closes the file stream
The file can now be closed since we have finished reading the image data. Our texture is stored in l_texture ready to be used! Nice isn't it? We will communicate all our happiness to OpenGL and it will immediately reward us by giving us more work to do...;-) I will need to introduce a series of OpenGL commands that will define some parameters which are needed in order to correctly interface our "raw" texture with the OpenGL layer.
The first thing we need to do is tell OpenGL what texture number to use. The global variable num_texture holds this value and is increased for every call to the function LoadBitmap. Therefore every image we load will have one unique number. We then need to use some function calls to set some very important parameters. The overall quality of the final result will depend on the parameters used in these calls. However, keep in mind that the better the result, the more rendering time and processing power will be required, and that means a lower FPS count.
Finally, we give OpenGL the pointer to the zone of memory where the image is saved.
glBindTexture(GL_TEXTURE_2D, num_texture);
- glBindTexture(GLenum target, GLuint texture); This function specifies the id of the current texture. Before doing any other operation, OpenGL needs to know the number of the current texture and the "texturing target" (GL_TEXTURE_1D for a one-dimensional texture, or GL_TEXTURE_2D for a 2d texture). This command must be used both in the texture loading phase and in the rendering phase.
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_NEAREST);
- void glTexParameterf( GLenum target, GLenum pname, GLfloat param ); This function sets some important parameters for the rendering of the texture. In target we must insert GL_TEXTURE_1D or GL_TEXTURE_2D (same as the last function). pname holds the name of the OpenGL parameter to modify.
- The parameters GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T select the behaviour of OpenGL when the coordinates of the texture go beyond their limits (generally 0,1). If we use the value GL_REPEAT in param (as we did), the result is a repetition of the texture starting from the beginning. For example, suppose you need to draw a floor and you have a texture that reproduces the look of a tile. In this case the tile is repeated every time the end of a coordinate (s or t) is reached. We must plan the texture coordinates in the correct way to do this, but don't worry about that particular aspect for now (a small sketch of a repeated floor texture follows this list). If the value GL_CLAMP is used in param, the last pixel of the texture is used to continue the mapping: OpenGL fills the remaining area with that last pixel rather than repeating the texture.
- GL_TEXTURE_MAG_FILTER and GL_TEXTURE_MIN_FILTER tell OpenGL how to behave when the texels have to be drawn in a greater or smaller space than their dimensions. This is due to the fact that a single texel will rarely correspond to a single pixel on the screen after the texture is applied to the object and the scene has been subjected to all the transformations (modeling, viewing etc.). If our object is placed very far from the point of view, each pixel of the screen corresponds to more than one texel. However, if the object is drawn very close to the point of view, a single texel covers many pixels. In both cases the problem is to decide which texel to draw and how to filter the image in order to avoid mangled results. If we use the GL_NEAREST parameter, the texel nearest to a given pixel is drawn. The GL_LINEAR parameter uses a weighted average of the 2x2 array of texels surrounding the pixel. There are other parameters that make use of something called MipMaps (when our image needs to be smaller). However, I will not get into too many OpenGL details for now (there are special books for this). Our overall purpose is to create a 3d engine, so the functions will be analysed gradually.
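To make the GL_REPEAT behaviour concrete, here is a hedged sketch of the tiled-floor example mentioned above. It is illustrative only (it is not part of this lesson's cube code): texture coordinates that reach 4.0 make the tile repeat four times along each direction of the quad.

// Illustrative only: a floor quad whose texture coordinates exceed 1.0.
// With GL_TEXTURE_WRAP_S/T set to GL_REPEAT the tile is repeated 4x4 times.
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-50.0f, -10.0f, -50.0f);
    glTexCoord2f(4.0f, 0.0f); glVertex3f( 50.0f, -10.0f, -50.0f);
    glTexCoord2f(4.0f, 4.0f); glVertex3f( 50.0f, -10.0f,  50.0f);
    glTexCoord2f(0.0f, 4.0f); glVertex3f(-50.0f, -10.0f,  50.0f);
glEnd();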
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glTexImage2D(GL_TEXTURE_2D, 0, 4, infoheader.biWidth, infoheader.biHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, l_texture);
- void glTexEnvf( GLenum target, GLenum pname, GLfloat param ); Normally when texture mapping is performed only the texels are used to draw every point on the surface of our object. However, this function allows us to modulate the colors of the texture with those colors that the polygon would have without texture mapping. We must insert the parameter GL_TEXTURE_ENV in target. In the pname parameter GL_TEXTURE_ENV_MODE is used. We then must decide the way to combine the colors, valid parameters are: GL_DECAL, GL_REPLACE, GL_MODULATE and GL_BLEND. There is no need to modulate the colors of the texture map in our example. We want to use the colors of the texels. Therefore, we use GL_REPLACE in that field.
- void glTexImage2D( GLenum target, GLint level, GLint internalformat, GLsizei width, GLsizei height, GLint border, GLenum format, GLenum type, const GLvoid *pixels ); This command is very important since it allows us to define a 2d texture. Again, GL_TEXTURE_2D is the first parameter. The level parameter is the level of detail of the texture (used for the Mipmaps discussed below); for the base image we put 0 as the value. We insert the value 4 in the internalformat parameter, which specifies the internal storage format of our image: 4 components per texel (RGBA). The following two parameters denote the width and height dimensions of the texture. We don't need a texture border now, so this field is 0. The parameters format and type describe the format and the type of the data in which our texture has been saved in memory; we must use the values GL_RGBA and GL_UNSIGNED_BYTE here. Finally, we pass the pointer to the zone of memory where we have stored our beloved texture in *pixels.
I have already spoken about the problems involved when there is not an exact correspondence between the texels and the pixels on the screen. Obviously, objects can be very far from the point of view, especially in an engine like ours in which we will use "spatial" coordinates. What really happens is due to perspective projection: the apparent size of each object depends heavily on its distance from the point of view. This is a slight problem. It's not easy to maintain good image quality if the textures are rendered very far away, even with special filtering systems. A solution to this problem is the use of Mipmaps.
Mipmaps are a series of textures derived from the main texture and already filtered. Each Mipmap has a smaller resolution than the previous one. It starts from the basic texture, whose dimensions are a power of 2 (for instance 256x256). The subsequent Mipmaps have their dimensions halved: 128x128, 64x64, 32x32, etc. Depending on the distance, OpenGL draws the Mipmap whose texel size best matches the size of a pixel on the screen. We can insert these textures manually, if they are ready, by calling the function glTexImage2D for each Mipmap: each call specifies the number of the Mipmap level, the new texture size and a different zone of memory for storage (a sketch of this manual approach follows the description of gluBuild2DMipmaps below). However, our Mipmaps have not been defined yet! In fact we won't define them by hand, because there is a beautiful function in the GLU utility library that will do this job for us:
gluBuild2DMipmaps(GL_TEXTURE_2D, 4, infoheader.biWidth, infoheader.biHeight, GL_RGBA, GL_UNSIGNED_BYTE, l_texture);
- void gluBuild2DMipmaps( GLenum target, GLint internalformat, GLsizei width, GLsizei height, GLenum format, GLenum type, const GLvoid *pixels ); Notice that the parameters of this function are the same as those of the previous one; only level and border are missing here. Once this call is made (using the same values as glTexImage2D), the Mipmaps are automatically created using the texture pointed to by *pixels as the starting point. Great, isn't it?
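Just to show what the manual approach mentioned above would look like, here is a hedged sketch of defining the levels by hand with glTexImage2D, assuming a 256x256 base texture. The pointers l_mip1 and l_mip2 are hypothetical buffers that would have to contain the already-filtered, half-size images; they do not exist in the tutorial code.

// Hypothetical manual mipmap chain for a 256x256 texture.
// l_mip1 and l_mip2 would have to be filled with the downscaled images by hand.
glTexImage2D(GL_TEXTURE_2D, 0, 4, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, l_texture); // level 0
glTexImage2D(GL_TEXTURE_2D, 1, 4, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, l_mip1);    // level 1
glTexImage2D(GL_TEXTURE_2D, 2, 4,  64,  64, 0, GL_RGBA, GL_UNSIGNED_BYTE, l_mip2);    // level 2
// ...and so on down to 1x1. gluBuild2DMipmaps does all of this for us.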
Well, now we can free the zone of memory holding the image.
free(l_texture);
Some of you may think that this procedure deletes the current texture we have loaded. Don't worry. OpenGL automatically stores the textures in its internal memory. So, we don't need to leave the old zone of memory full of values that we won't use anymore. Lastly, the number of the texture loaded is returned to the caller.
    return (num_texture);
}
You are a little bit worn-out, aren't you? Well! That's a good sign after all we are programmers and stress is our best and faithful friend... Now let's relax a little bit with phase 2. It's not too hard to understand...
ASSIGN A 2D POINT OF THE TEXTURE TO EVERY VERTEX
With our texture already defined, we now need to modify our structure obj_type by adding some fields. Our goal here is to associate each vertex with a 2d point of the image, in order to cover the object as desired. Let's define a new type:
typedef struct{ float u,v; }mapcoord_type;
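To get a quick feel for what these two numbers mean, here are a few example values (an illustration only, not part of the tutorial code):

// A few (u,v) pairs and the point of the image they select:
//   (0.0, 0.0) -> one corner of the image
//   (1.0, 0.0) -> the opposite end of the same edge
//   (0.5, 0.5) -> the center of the image
//   (1.0, 1.0) -> the corner diagonally opposite (0.0, 0.0)
mapcoord_type center = { 0.5f, 0.5f };   // hypothetical example variable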
The mapcoord_type structure holds two variables, u and v: a pair of coordinates used to identify a 2d point in a texture. Why are we calling these two coordinates u and v rather than x and y? The coordinates of a texture have been called u and v by convention for a long time, mainly to avoid confusion with the coordinates x, y and z used for vertices. We must assign a point of the texture to each vertex. In essence, a small triangular section of the image must be assigned to every triangle contained in our object. Therefore, we will modify the obj_type structure to account for texture coordinates:
/*** The object type ***/
typedef struct
{
    vertex_type vertex[MAX_VERTICES];
    polygon_type polygon[MAX_POLYGONS];
    mapcoord_type mapcoord[MAX_VERTICES];
    int id_texture;
} obj_type, *obj_type_ptr;
Two things have been added: an array called mapcoord, used to store the texture coordinates for each vertex and a variable named id_texture, used to hold the current texture id number (the return value of the function LoadBitmap). Now, the structure obj_type must be filled, please have a careful look at the following code:
obj_type cube =
{
    {
        -10,-10, 10,    // vertex v0
         10,-10, 10,    // vertex v1
         10,-10,-10,    // vertex v2
        -10,-10,-10,    // vertex v3
        -10, 10, 10,    // vertex v4
         10, 10, 10,    // vertex v5
         10, 10,-10,    // vertex v6
        -10, 10,-10     // vertex v7
    },
    {
        0, 1, 4,    // polygon v0,v1,v4
        1, 5, 4,    // polygon v1,v5,v4
        1, 2, 5,    // polygon v1,v2,v5
        2, 6, 5,    // polygon v2,v6,v5
        2, 3, 6,    // polygon v2,v3,v6
        3, 7, 6,    // polygon v3,v7,v6
        3, 0, 7,    // polygon v3,v0,v7
        0, 4, 7,    // polygon v0,v4,v7
        4, 5, 7,    // polygon v4,v5,v7
        5, 6, 7,    // polygon v5,v6,v7
        3, 2, 0,    // polygon v3,v2,v0
        2, 1, 0     // polygon v2,v1,v0
    },
    {
        0.0, 0.0,   // mapping coordinates for vertex v0
        1.0, 0.0,   // mapping coordinates for vertex v1
        1.0, 0.0,   // mapping coordinates for vertex v2
        0.0, 0.0,   // mapping coordinates for vertex v3
        0.0, 1.0,   // mapping coordinates for vertex v4
        1.0, 1.0,   // mapping coordinates for vertex v5
        1.0, 1.0,   // mapping coordinates for vertex v6
        0.0, 1.0    // mapping coordinates for vertex v7
    },
    0               // identifier for the texture
};
Things are going well: the texture is defined and the object structure is filled. There's nothing left to do but draw our cube using texture mapping!
DRAW THE POLYGONS "COVERING THEM" WITH SECTIONS OF THE TEXTURE
The first function to be modified is "init". During the procedure of initialization we have to enable texture mapping and call the function LoadBitmap to load the texture:
glEnable(GL_TEXTURE_2D);
cube.id_texture=LoadBitmap("texture1.bmp");
if (cube.id_texture==-1)
{
    MessageBox(NULL,"Image file: texture1.bmp not found", "Spacesimulator.net", MB_OK | MB_ICONERROR);
    exit (0);
}
- glEnable(GL_TEXTURE_2D); enables 2d texture mapping. Notice that the return value of the function LoadBitmap is an identifier used to assign the texture to the object. When we implement the ability to manage more objects in our engine we will assign a different number to each texture we use. If the function LoadBitmap has not succeeded in loading the image, a MessageBox with the error message will be shown and the program will be interrupted.
Now let's modify the drawing function and remove the calls made to glColor3f because we don't need to assign colors to the vertices anymore. Instead we use the following:
glBindTexture(GL_TEXTURE_2D, cube.id_texture);
glBegin(GL_TRIANGLES);
for (l_index=0;l_index<12;l_index++)
{
    /*** FIRST VERTEX ***/
    glTexCoord2f( cube.mapcoord[ cube.polygon[l_index].a ].u,
                  cube.mapcoord[ cube.polygon[l_index].a ].v);
    glVertex3f( cube.vertex[ cube.polygon[l_index].a ].x,
                cube.vertex[ cube.polygon[l_index].a ].y,
                cube.vertex[ cube.polygon[l_index].a ].z);

    /*** SECOND VERTEX ***/
    glTexCoord2f( cube.mapcoord[ cube.polygon[l_index].b ].u,
                  cube.mapcoord[ cube.polygon[l_index].b ].v);
    glVertex3f( cube.vertex[ cube.polygon[l_index].b ].x,
                cube.vertex[ cube.polygon[l_index].b ].y,
                cube.vertex[ cube.polygon[l_index].b ].z);

    /*** THIRD VERTEX ***/
    glTexCoord2f( cube.mapcoord[ cube.polygon[l_index].c ].u,
                  cube.mapcoord[ cube.polygon[l_index].c ].v);
    glVertex3f( cube.vertex[ cube.polygon[l_index].c ].x,
                cube.vertex[ cube.polygon[l_index].c ].y,
                cube.vertex[ cube.polygon[l_index].c ].z);
}
glEnd();
- void glBindTexture (GLenum target, GLuint texture); We already looked at this function during the initialization of the texture. The difference now is that the parameter texture holds the identifier of the texture of the current object. Therefore, this function lets OpenGL know which texture is the active one.
- void glTexCoord2f (GLfloat s, GLfloat t); Each call to this function defines the two coordinates u and v of the texture (which OpenGL calls s and t respectively). This function must appear between glBegin and glEnd, and it must be called before every call to glVertex3f. This is how a coordinate of the texture is assigned to every vertex.
THE LAST THING
The only thing left to do is insert another file into our project and call it "texture.h". Then, add these two lines of code to it:
extern int num_texture;
extern int LoadBitmap(char *filename);
You must then include this header in the "include" section of the main cpp file:
#include "texture.h"
If you are using the command line to compile this project remember to compile the texture.cpp file or add it to the makefile.
Nothing could be easier right? =)
There are a few things to consider now about our code. If you are tired you can just skip to the conclusion. However, I must explain a little problem that you may have noticed.
A LITTLE TROUBLE...
The more observant of you may have noticed that we have only drawn the complete pattern of the texture on two faces of the cube. In fact, only the faces made up of the vertices v0,v1,v5,v4 and v3,v2,v6,v7 have the complete texture drawn. This is because the mapping coordinates are correct only for the 4 triangles that compose those 2 faces. Unfortunately, with the structure we have used there is nothing we can do for the remaining faces.
The cause of this anomaly is due to the fact that we have tightly coupled one and only one texture coordinate to every vertex. Therefore, some faces were forced to sacrifice their texture coordinates because they share their vertices.
There are some solutions to this problem :
1-Rather than having a one-to-one relationship between each texture coordinate and each vertex, extend polygon_type and add a texture coordinate to every point of the polygon. Now a vertex can have more than one texture coordinate and the problem is solved. Here is an example:
typedef struct
{
    int a,b,c;
    mapcoord_type map_a, map_b, map_c;  // Every point of the polygon has a point u,v in the texture
} polygon_type;
2-Add the number of vertices necessary to draw the texture correctly on every triangle (see the sketch after this list). This also solves the problem, but it makes processing heavier: the number of vertices increases dramatically. Our cube, for example, would require 4 vertices for each face: 4 vertices x 6 faces = 24 vertices! This approach is necessary when a certain uniformity in the object must be maintained.
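As a sketch of what solution 2 means for our cube, here is how one face might be described with its own four vertices, each carrying its own texture coordinate. The data is hypothetical and only illustrates the duplication (it is not used in this lesson), and it assumes vertex_type holds just x, y and z as in the earlier lessons.

// Hypothetical: the front face of the cube as 4 dedicated vertices.
// Nothing is shared with the other faces, so every corner carries exactly
// the texture coordinate it needs. 6 faces x 4 vertices = 24 vertices.
vertex_type   front_vertex[4]   = { {-10,-10,10}, {10,-10,10}, {10,10,10}, {-10,10,10} };
mapcoord_type front_mapcoord[4] = { { 0.0, 0.0 }, { 1.0, 0.0 }, { 1.0, 1.0 }, { 0.0, 1.0 } };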
So, guess which solution we're going to use? Are you thinking solution 1? Nope! I will use 2! Why have I decided to use this more complex solution? A complex figure rarely needs to have a lot of vertices added to it in order to maintain uniformity with its texture coordinates. Moreover, almost all 3d engines use solution 2. The .3ds format that we will use for loading models also works this way.
However, this whole discussion is a bit premature, because in this tutorial we will simply draw the cube as it is: four faces with mangled mapping and only two with the texture correctly applied. We don't need to worry about implementing solution 2 by hand because the 3ds format will help us do this. In fact, 3d Studio automatically adds vertices where necessary.
Why did I start talking about this? Well, simply put, because I don't think it's a good idea to keep on working while leaving this problem behind us. I have wasted a lot of time on these issues in the past and don't want you to struggle needlessly with them.
CONCLUSIONS
Well, we are done! I don't know about you, but I am really tired! Guess what beautiful surprise I have saved for you in the next lesson? You will no longer see that horrendous cube on the screen. We are going to program a model loader for objects in the .3ds format! I'm sure many of you are familiar with 3d Studio, and an infinite number of objects in the 3ds format (many of these are spaceships!) can be found on the internet.
But for now I will leave you with your great cube =) Have a good time!
SOURCE CODE
The Source Code of this lesson can be downloaded from the Tutorials Main Page