
Drawing polygons

The graphics pipeline

By learning OpenGL, you've decided that you want to do all of the hard work yourself. That inevitably means that you'll be thrown in the deep end, but once you understand the essentials, you'll see that doing things the hard way doesn't have to be so difficult after all. To top it all off, the exercises at the end of this chapter will show you the sheer amount of control you have over the rendering process by doing things the modern way!

The graphics pipeline covers all of the steps that follow each other up on processing the input data to get to the final output image. I'll explain these steps with the help of the following illustration.

It all begins with the vertices, these are the points from which shapes like triangles will later be constructed. Each of these points is stored with certain attributes and it's up to you to decide what kind of attributes you want to store. Commonly used attributes are 3D position in the world and texture coordinates.

The vertex shader is a small program running on your graphics card that processes every one of these input vertices individually. This is where the perspective transformation takes place, which projects vertices with a 3D world position onto your 2D screen! It also passes important attributes like color and texture coordinates further down the pipeline.

After the input vertices have been transformed, the graphics card will form triangles, lines or points out of them. These shapes are called primitives because they form the basis of more complex shapes. There are some additional drawing modes to choose from, like triangle strips and line strips. These reduce the number of vertices you need to pass if you want to create objects where each next primitive is connected to the last one, like a continuous line consisting of several segments.

The next step, the geometry shader, is completely optional and was only recently introduced. Unlike the vertex shader, the geometry shader can output more data than comes in. It takes the primitives from the shape assembly stage as input and can either pass a primitive through down to the rest of the pipeline, modify it first, completely discard it or even replace it with other primitive(s). Since the communication between the GPU and the rest of the PC is relatively slow, this stage can help you reduce the amount of data that needs to be transferred. With a voxel game for example, you could pass vertices as point vertices, along with an attribute for their world position, color and material and the actual cubes can be produced in the geometry shader with a point as input!

After the final list of shapes is composed and converted to screen coordinates, the rasterizer turns the visible parts of the shapes into pixel-sized fragments. The vertex attributes coming from the vertex shader or geometry shader are interpolated and passed as input to the fragment shader for each fragment. As you can see in the image, the colors are smoothly interpolated over the fragments that make up the triangle, even though only three points were specified.
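To build some intuition for this interpolation, here is a small CPU-side sketch (my own illustration, not part of the chapter's code) that blends three corner colors with barycentric weights, which is essentially what the rasterizer does for each fragment:

```cpp
#include <array>

// Interpolate a per-vertex attribute (here: an RGB color) across a triangle.
// w0, w1 and w2 are barycentric weights; inside the triangle they are all
// non-negative and sum to 1. The rasterizer performs this kind of blend for
// every fragment before invoking the fragment shader.
std::array<float, 3> interpolateColor(const std::array<float, 3>& c0,
                                      const std::array<float, 3>& c1,
                                      const std::array<float, 3>& c2,
                                      float w0, float w1, float w2)
{
    return {
        w0 * c0[0] + w1 * c1[0] + w2 * c2[0],
        w0 * c0[1] + w1 * c1[1] + w2 * c2[1],
        w0 * c0[2] + w1 * c1[2] + w2 * c2[2],
    };
}
```

At a corner, one weight is 1 and the attribute is the corner's color unchanged; in the middle of the triangle all three colors contribute, which is why the colors blend smoothly.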

The fragment shader processes each individual fragment along with its interpolated attributes and should output the final color. This is usually done by sampling from a texture using the interpolated texture coordinate vertex attributes or simply outputting a color. In more advanced scenarios, there could also be calculations related to lighting and shadowing and special effects in this program. The shader also has the ability to discard certain fragments, which means that a shape will be see-through there.

Finally, the end result is composed from all these shape fragments by blending them together and performing depth and stencil testing. All you need to know about these last two right now, is that they allow you to use additional rules to throw away certain fragments and let others pass. For example, if one triangle is obscured by another triangle, the fragment of the closer triangle should end up on the screen.

Now that you know how your graphics card turns an array of vertices into an image on the screen, let's get to work!

Vertex input

The first thing you have to decide on is what data the graphics card is going to need to draw your scene correctly. As mentioned above, this data comes in the form of vertex attributes. You're free to come up with any kind of attribute you want, but it all inevitably begins with the world position. Whether you're doing 2D graphics or 3D graphics, this is the attribute that will determine where the objects and shapes end up on your screen in the end.

Device coordinates

When your vertices have been processed by the pipeline outlined above, their coordinates will have been transformed into device coordinates. Device X and Y coordinates are mapped to the screen between -1 and 1.

![](media/img/c2_dc.png)



Just like a graph, the center has coordinates (0,0) and the y axis is positive above the center. This seems unnatural because graphics applications usually have (0,0) in the top-left corner and (width,height) in the bottom-right corner, but it's an excellent way to simplify 3D calculations and to stay resolution independent.
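For example, here is a small helper (my own illustration of the mapping, not code the chapter uses) that converts a pixel coordinate with a top-left origin into device coordinates:

```cpp
#include <utility>

// Map a pixel coordinate to OpenGL device coordinates.
// (0,0) is the top-left pixel; device coordinates run from -1 to 1
// on both axes, with Y pointing up instead of down.
std::pair<float, float> pixelToDevice(int px, int py, int width, int height)
{
    float x = 2.0f * px / width - 1.0f;
    float y = 1.0f - 2.0f * py / height;
    return {x, y};
}
```

The top-left pixel maps to (-1, 1) and the center of the screen maps to (0, 0), matching the coordinate system described above.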

The triangle above consists of three vertices positioned at (0,0.5), (0.5,-0.5) and (-0.5,-0.5) in clockwise order. It is clear that the only variation between the vertices here is the position, so that's the only attribute we need. Since we're passing the device coordinates directly, an X and Y coordinate suffices for the position.

OpenGL expects you to send all of your vertices in a single array, which may be confusing at first. To understand the format of this array, let's see what it would look like for our triangle.

    float vertices[] = {
         0.0f,  0.5f, // Vertex 1 (X, Y)
         0.5f, -0.5f, // Vertex 2 (X, Y)
        -0.5f, -0.5f  // Vertex 3 (X, Y)
    };

As you can see, this array should simply be a list of all vertices with their attributes packed together. The order in which the attributes appear doesn't matter, as long as it's the same for each vertex. The order of the vertices doesn't have to be sequential (i.e. the order in which shapes are formed), but this requires us to provide extra data in the form of an element buffer. This will be discussed at the end of this chapter as it would only complicate things for now.
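A handy consequence of this packed layout is that the vertex count follows from the array size; a quick sketch (the variable names here are my own):

```cpp
#include <cstddef>

// The triangle's vertices, two floats (X, Y) each, packed in one array.
float vertices[] = {
     0.0f,  0.5f,
     0.5f, -0.5f,
    -0.5f, -0.5f
};

// With a fixed number of floats per vertex, the vertex count follows from
// the total array size. glDrawArrays will need this number later.
constexpr std::size_t floatsPerVertex = 2;
constexpr std::size_t vertexCount =
    sizeof(vertices) / (floatsPerVertex * sizeof(float));
```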

The next step is to upload this vertex data to the graphics card. This is important because the memory on your graphics card is much faster and you won't have to send the data again every time your scene needs to be rendered (about 60 times per second).

This is done by creating a Vertex Buffer Object (VBO):

    GLuint vbo;
    glGenBuffers(1, &vbo); // Generate 1 buffer

The memory is managed by OpenGL, so instead of a pointer you get a positive number as a reference to it. GLuint is simply a cross-platform substitute for unsigned int, just like GLint is one for int. You will need this number to make the VBO active and to destroy it when you're done with it.

To upload the actual data to it you first have to make it the active object by calling glBindBuffer:

          glBindBuffer(GL_ARRAY_BUFFER, vbo);                  

As hinted by the GL_ARRAY_BUFFER enum value there are other types of buffers, but they are not important right now. This statement makes the VBO we just created the active array buffer. Now that it's active we can copy the vertex data to it.

          glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);                  

Notice that this function doesn't refer to the id of our VBO, but instead to the active array buffer. The second parameter specifies the size in bytes. The final parameter is very important and its value depends on the usage of the vertex data. I'll outline the ones related to drawing here:

  • GL_STATIC_DRAW: The vertex data will be uploaded once and drawn many times (e.g. the world).
  • GL_DYNAMIC_DRAW: The vertex data will be created once, changed from time to time, but drawn many times more than that.
  • GL_STREAM_DRAW: The vertex data will be uploaded once and drawn once.

This usage value will determine in what kind of memory the data is stored on your graphics card for the highest efficiency. For example, VBOs with GL_STREAM_DRAW as type may store their data in memory that allows faster writing in favour of slightly slower drawing.

The vertices with their attributes have been copied to the graphics card now, but they're not quite ready to be used yet. Remember that we can make up any kind of attribute we want and in any order, so now comes the moment where you have to explain to the graphics card how to handle these attributes. This is where you'll see how flexible modern OpenGL really is.

Shaders

As discussed earlier, there are three shader stages your vertex data will pass through. Each shader stage has a strictly defined purpose and in older versions of OpenGL, you could only slightly tweak what happened and how it happened. With modern OpenGL, it's up to us to instruct the graphics card what to do with the data. This is why it's possible to decide per application what attributes each vertex should have. You'll have to implement both the vertex and fragment shader to get something on the screen, the geometry shader is optional and is discussed later.

Shaders are written in a C-style language called GLSL (OpenGL Shading Language). OpenGL will compile your program from source at runtime and copy it to the graphics card. Each version of OpenGL has its own version of the shader language with availability of a certain feature set and we will be using GLSL 1.50. This version number may seem a bit off when we're using OpenGL 3.2, but that's because shaders were only introduced in OpenGL 2.0 as GLSL 1.10. Starting from OpenGL 3.3, this problem was solved and the GLSL version is the same as the OpenGL version.

Vertex shader

The vertex shader is a program on the graphics card that processes each vertex and its attributes as they appear in the vertex array. Its duty is to output the final vertex position in device coordinates and to output any data the fragment shader requires. That's why the 3D transformation should take place here. The fragment shader depends on attributes like the color and texture coordinates, which will usually be passed from input to output without any calculations.

Remember that our vertex position is already specified as device coordinates and no other attributes exist, so the vertex shader will be fairly bare bones.

    #version 150 core

    in vec2 position;

    void main()
    {
        gl_Position = vec4(position, 0.0, 1.0);
    }

The #version preprocessor directive is used to indicate that the code that follows is GLSL 1.50 code using OpenGL's core profile. Next, we specify that there is only one attribute, the position. Apart from the regular C types, GLSL has built-in vector and matrix types identified by vec* and mat* identifiers. The type of the values within these constructs is always a float. The number after vec specifies the number of components (x, y, z, w) and the number after mat specifies the number of rows/columns. Since the position attribute consists of only an X and Y coordinate, vec2 is perfect.

You can be quite creative when working with these vertex types. In the example above a shortcut was used to set the first two components of the vec4 to those of the vec2. These two lines are equal:

    gl_Position = vec4(position, 0.0, 1.0);
    gl_Position = vec4(position.x, position.y, 0.0, 1.0);

When you're working with colors, you can also access the individual components with r, g, b and a instead of x, y, z and w. This makes no difference and can help with clarity.
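As a rough C++ analogy for the constructor shortcut above (GLSL provides this natively, the types here are just my own illustration):

```cpp
// Toy analogues of GLSL's vec2 and vec4, for illustration only.
struct Vec2 { float x, y; };
struct Vec4 { float x, y, z, w; };

// Mimics the GLSL constructor shortcut vec4(position, 0.0, 1.0): the first
// two components are taken from the vec2, the rest are given explicitly.
Vec4 makeVec4(Vec2 v, float z, float w) { return {v.x, v.y, z, w}; }
```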

The final position of the vertex is assigned to the special gl_Position variable, because the position is needed for primitive assembly and many other built-in processes. For these to function correctly, the last value w needs to have a value of 1.0f. Other than that, you're free to do anything you want with the attributes and we'll see how to output those when we add color to the triangle later in this chapter.

Fragment shader

The output from the vertex shader is interpolated over all the pixels on the screen covered by a primitive. These pixels are called fragments and this is what the fragment shader operates on. Just like the vertex shader it has one mandatory output, the final color of a fragment. It's up to you to write the code for calculating this color from vertex colors, texture coordinates and any other data coming from the vertex shader.

Our triangle only consists of white pixels, so the fragment shader simply outputs that color every time:

    #version 150 core

    out vec4 outColor;

    void main()
    {
        outColor = vec4(1.0, 1.0, 1.0, 1.0);
    }

You'll immediately notice that we're not using some built-in variable for outputting the color, say gl_FragColor. This is because a fragment shader can in fact output multiple colors and we'll see how to handle this when actually loading these shaders. The outColor variable uses the type vec4, because each color consists of a red, green, blue and alpha component. Colors in OpenGL are generally represented as floating point numbers between 0.0 and 1.0 instead of the common 0 and 255.
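Converting between the two conventions is a one-liner; a small sketch for illustration:

```cpp
// Convert an 8-bit color channel (0-255) into OpenGL's 0.0-1.0 range.
float channelToFloat(unsigned char c)
{
    return c / 255.0f;
}
```

For example, an RGB color of (255, 128, 0) becomes roughly (1.0, 0.5, 0.0) in shader terms.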

Compiling shaders

Compiling shaders is easy once you have loaded the source code (either from file or as a hard-coded string). You can easily include your shader source in the C++ code through C++11 raw string literals:

    const char* vertexSource = R"glsl(
        #version 150 core

        in vec2 position;

        void main()
        {
            gl_Position = vec4(position, 0.0, 1.0);
        }
    )glsl";

Just like vertex buffers, creating a shader itself starts with creating a shader object and loading data into it.

    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vertexShader, 1, &vertexSource, NULL);

Unlike VBOs, you can simply pass a reference to shader functions instead of making it active or anything like that. The glShaderSource function can take multiple source strings in an array, but you'll usually have your source code in one char array. The last parameter can contain an array of source code string lengths, passing NULL simply makes it stop at the null terminator.

All that's left is compiling the shader into code that can be executed by the graphics card now:

          glCompileShader(vertexShader);                  

Be aware that if the shader fails to compile, e.g. because of a syntax error, glGetError will not report an error! See the block below for info on how to debug shaders.

Checking if a shader compiled successfully

    GLint status;
    glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &status);

If status is equal to GL_TRUE, then your shader was compiled successfully.

Retrieving the compile log

    char buffer[512];
    glGetShaderInfoLog(vertexShader, 512, NULL, buffer);

This will store the first 511 bytes + null terminator of the compile log in the specified buffer. The log may also report useful warnings even when compiling was successful, so it's useful to check it out from time to time when you develop your shaders.
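The truncation rule can be illustrated with a plain C++ sketch (this mimics the behaviour described above, it is not the actual driver code):

```cpp
#include <cstring>

// A CPU-side sketch of glGetShaderInfoLog's truncation rule: at most
// maxLength bytes are written, including the null terminator, so a
// 512-byte buffer receives the first 511 characters of the log.
void copyLog(const char* log, char* buffer, std::size_t maxLength)
{
    if (maxLength == 0) return;
    std::size_t n = std::strlen(log);
    if (n > maxLength - 1) n = maxLength - 1;
    std::memcpy(buffer, log, n);
    buffer[n] = '\0';
}
```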

The fragment shader is compiled in exactly the same way:

    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fragmentShader, 1, &fragmentSource, NULL);
    glCompileShader(fragmentShader);

Again, be sure to check if your shader was compiled successfully, because it will save you from a headache later on.

Combining shaders into a program

Up until now the vertex and fragment shaders have been two separate objects. While they've been programmed to work together, they aren't actually connected yet. This connection is made by creating a program out of these two shaders.

    GLuint shaderProgram = glCreateProgram();
    glAttachShader(shaderProgram, vertexShader);
    glAttachShader(shaderProgram, fragmentShader);

Since a fragment shader is allowed to write to multiple framebuffers, you need to explicitly specify which output is written to which framebuffer. This needs to happen before linking the program. However, since this is 0 by default and there's only one output right now, the following line of code is not necessary:

          glBindFragDataLocation(shaderProgram, 0, "outColor");                  

Use glDrawBuffers when rendering to multiple framebuffers, because only the first output will be enabled by default.

After attaching both the fragment and vertex shaders, the connection is made by linking the program. It is allowed to make changes to the shaders after they've been added to a program (or multiple programs!), but the actual result will not change until a program has been linked again. It is also possible to attach multiple shaders for the same stage (e.g. fragment) if they're parts forming the whole shader together. A shader object can be deleted with glDeleteShader, but it will not actually be removed before it has been detached from all programs with glDetachShader.

          glLinkProgram(shaderProgram);                  

To actually start using the shaders in the program, you just have to call:

          glUseProgram(shaderProgram);                  

Just like a vertex buffer, only one program can be active at a time.

Making the link between vertex data and attributes

Although we have our vertex data and shaders now, OpenGL still doesn't know how the attributes are formatted and ordered. You first need to retrieve a reference to the position input in the vertex shader:

          GLint posAttrib = glGetAttribLocation(shaderProgram, "position");                  

The location is a number depending on the order of the input definitions. The first and only input position in this example will always have location 0.

With the reference to the input, you can specify how the data for that input is retrieved from the array:

          glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE, 0, 0);                  

The first parameter references the input. The second parameter specifies the number of values for that input, which is the same as the number of components of the vec. The third parameter specifies the type of each component and the fourth parameter specifies whether the input values should be normalized between -1.0 and 1.0 (or 0.0 and 1.0 depending on the format) if they aren't floating point numbers.

The last two parameters are arguably the most important here as they specify how the attribute is laid out in the vertex array. The first number specifies the stride, or how many bytes are between each position attribute in the array. The value 0 means that there is no data in between. This is currently the case as the position of each vertex is immediately followed by the position of the next vertex. The last parameter specifies the offset, or how many bytes from the start of the array the attribute occurs. Since there are no other attributes, this is 0 as well.

It is important to know that this function will store not only the stride and the offset, but also the VBO that is currently bound to GL_ARRAY_BUFFER. That means that you don't have to explicitly bind the correct VBO when the actual drawing functions are called. This also implies that you can use a different VBO for each attribute.

Don't worry if you don't fully understand this yet, as we'll see how to alter this to add more attributes soon enough.

          glEnableVertexAttribArray(posAttrib);                  

Last, but not least, the vertex attribute array needs to be enabled.

Vertex Array Objects

You can imagine that real graphics programs use many different shaders and vertex layouts to take care of a wide variety of needs and special effects. Changing the active shader program is easy enough with a call to glUseProgram, but it would be quite inconvenient if you had to set up all of the attributes again every time.

Luckily, OpenGL solves that problem with Vertex Array Objects (VAO). VAOs store all of the links between the attributes and your VBOs with raw vertex data.

A VAO is created in the same way as a VBO:

    GLuint vao;
    glGenVertexArrays(1, &vao);

To start using it, just bind it:

          glBindVertexArray(vao);                  

As soon as you've bound a certain VAO, every time you call glVertexAttribPointer, that information will be stored in that VAO. This makes switching between different vertex data and vertex formats as easy as binding a different VAO! Just remember that a VAO doesn't store any vertex data by itself, it just references the VBOs you've created and how to retrieve the attribute values from them.

Since only calls after binding a VAO stick to it, make sure that you've created and bound the VAO at the start of your program. Any vertex buffers and element buffers bound before it will be ignored.

Drawing

Now that you've loaded the vertex data, created the shader programs and linked the data to the attributes, you're ready to draw the triangle. The VAO that was used to store the attribute information is already bound, so you don't have to worry about that. All that's left is to simply call glDrawArrays in your main loop:

          glDrawArrays(GL_TRIANGLES, 0, 3);                  

The first parameter specifies the kind of primitive (commonly point, line or triangle), the second parameter specifies how many vertices to skip at the beginning and the last parameter specifies the number of vertices (not primitives!) to process.

When you run your program now, you should see the following:

If you don't see anything, make sure that the shaders have compiled correctly, that the program has linked correctly, that the attribute array has been enabled, that the VAO has been bound before specifying the attributes, that your vertex data is correct and that glGetError returns 0. If you can't find the problem, try comparing your code to this sample.

Uniforms

Right now the white color of the triangle has been hard-coded into the shader code, but what if you wanted to change it after compiling the shader? As it turns out, vertex attributes are not the only way to pass data to shader programs. There is another way to pass data to the shaders called uniforms. These are essentially global variables, having the same value for all vertices and/or fragments. To demonstrate how to use these, let's make it possible to change the color of the triangle from the program itself.

By making the color in the fragment shader a uniform, it will end up looking like this:

    #version 150 core

    uniform vec3 triangleColor;

    out vec4 outColor;

    void main()
    {
        outColor = vec4(triangleColor, 1.0);
    }

The last component of the output color is the transparency, which is not very interesting right now. If you run your program now you'll see that the triangle is black, because the value of triangleColor hasn't been set yet.

Changing the value of a uniform is just like setting vertex attributes, you first have to grab the location:

          GLint uniColor = glGetUniformLocation(shaderProgram, "triangleColor");                  

The values of uniforms are changed with any of the glUniformXY functions, where X is the number of components and Y is the type. Common types are f (float), d (double) and i (integer).

    glUniform3f(uniColor, 1.0f, 0.0f, 0.0f);

If you run your program now, you'll see that the triangle is red. To make things a little more exciting, try varying the color with the time by doing something like this in your main loop:

    auto t_start = std::chrono::high_resolution_clock::now();

    ...

    auto t_now = std::chrono::high_resolution_clock::now();
    float time = std::chrono::duration_cast<std::chrono::duration<float>>(t_now - t_start).count();

    glUniform3f(uniColor, (sin(time * 4.0f) + 1.0f) / 2.0f, 0.0f, 0.0f);

Although this example may not be very exciting, it does demonstrate that uniforms are essential for controlling the behaviour of shaders at runtime. Vertex attributes on the other hand are ideal for describing a single vertex.
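The remapping in that call is worth spelling out: sin produces values between -1 and 1, and (sin(t * 4) + 1) / 2 shifts them into the valid 0 to 1 color range. A small sketch for illustration:

```cpp
#include <cmath>

// The red channel used in the main loop above: remap sin's -1..1 output
// into the 0..1 range that OpenGL expects for a color channel.
float pulse(float time)
{
    return (std::sin(time * 4.0f) + 1.0f) / 2.0f;
}
```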

See the code if you have any trouble getting this to work.

Adding some more colors

Although uniforms have their place, color is something we'd rather like to specify per corner of the triangle! Let's add a color attribute to the vertices to accomplish this.

We'll first have to add the extra attributes to the vertex data. Transparency isn't really relevant, so we'll only add the red, green and blue components:

    float vertices[] = {
         0.0f,  0.5f, 1.0f, 0.0f, 0.0f, // Vertex 1: Red
         0.5f, -0.5f, 0.0f, 1.0f, 0.0f, // Vertex 2: Green
        -0.5f, -0.5f, 0.0f, 0.0f, 1.0f  // Vertex 3: Blue
    };

Then we have to change the vertex shader to accept it as input and pass it to the fragment shader:

    #version 150 core

    in vec2 position;
    in vec3 color;

    out vec3 Color;

    void main()
    {
        Color = color;
        gl_Position = vec4(position, 0.0, 1.0);
    }

And Color is added as input to the fragment shader:

    #version 150 core

    in vec3 Color;

    out vec4 outColor;

    void main()
    {
        outColor = vec4(Color, 1.0);
    }

Make sure that the output of the vertex shader and the input of the fragment shader have the same name, or the shaders will not be linked properly.

Now, we just need to alter the attribute pointer code a bit to accommodate the new X, Y, R, G, B attribute order.

    GLint posAttrib = glGetAttribLocation(shaderProgram, "position");
    glEnableVertexAttribArray(posAttrib);
    glVertexAttribPointer(posAttrib, 2, GL_FLOAT, GL_FALSE,
                          5*sizeof(float), 0);

    GLint colAttrib = glGetAttribLocation(shaderProgram, "color");
    glEnableVertexAttribArray(colAttrib);
    glVertexAttribPointer(colAttrib, 3, GL_FLOAT, GL_FALSE,
                          5*sizeof(float), (void*)(2*sizeof(float)));

The fifth parameter is set to 5*sizeof(float) now, because each vertex consists of 5 floating point attribute values. The offset of 2*sizeof(float) for the color attribute is there because each vertex starts with 2 floating point values for the position that it has to skip over.
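To make the byte arithmetic explicit, here are the layout constants as a sketch (the names are my own, not OpenGL's):

```cpp
#include <cstddef>

// Byte layout of the interleaved X, Y, R, G, B vertex format: five floats
// per vertex, position first, color starting two floats in.
constexpr std::size_t floatsPerVertex = 5;
constexpr std::size_t stride          = floatsPerVertex * sizeof(float);
constexpr std::size_t positionOffset  = 0;
constexpr std::size_t colorOffset     = 2 * sizeof(float);
```

With 4-byte floats this gives a stride of 20 bytes and a color offset of 8 bytes, matching the glVertexAttribPointer calls above.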

And we're done!

You should now have a reasonable understanding of vertex attributes and shaders. If you ran into problems, ask in the comments or have a look at the altered source code.

Element buffers

Right now, the vertices are specified in the order in which they are drawn. If you wanted to add another triangle, you would have to add 3 additional vertices to the vertex array. There is a way to control the order, which also enables you to reuse existing vertices. This can save you a lot of memory when working with real 3D models later on, because each point is usually occupied by a corner of three triangles!

An element array is filled with unsigned integers referring to vertices bound to GL_ARRAY_BUFFER. If we just want to draw them in the order they are in now, it'll look like this:

    GLuint elements[] = {
        0, 1, 2
    };

They are loaded into video memory through a VBO just like the vertex data:

    GLuint ebo;
    glGenBuffers(1, &ebo);

    ...

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ebo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER,
        sizeof(elements), elements, GL_STATIC_DRAW);

The only thing that differs is the target, which is GL_ELEMENT_ARRAY_BUFFER this time.

To actually make use of this buffer, you'll have to change the draw command:

          glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, 0);                  

The first parameter is the same as with glDrawArrays, but the other ones all refer to the element buffer. The second parameter specifies the number of indices to draw, the third parameter specifies the type of the element data and the last parameter specifies the offset. The only real difference is that you're talking about indices instead of vertices now.
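Conceptually, indexed drawing walks the index list and fetches the referenced vertex for each entry. A CPU-side sketch of that idea (an illustration, not what the driver literally does):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Expand an index list into a flat vertex stream: for each index, copy the
// referenced vertex's attributes. This is the conceptual effect of
// glDrawElements, and it shows how vertices get reused.
std::vector<float> expandIndices(const std::vector<float>& vertices,
                                 std::size_t floatsPerVertex,
                                 const std::vector<std::uint32_t>& indices)
{
    std::vector<float> out;
    for (std::uint32_t i : indices)
        for (std::size_t c = 0; c < floatsPerVertex; ++c)
            out.push_back(vertices[i * floatsPerVertex + c]);
    return out;
}
```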

To see how an element buffer can be beneficial, let's try drawing a rectangle using two triangles. We'll start by doing it without an element buffer.

    float vertices[] = {
        -0.5f,  0.5f, 1.0f, 0.0f, 0.0f, // Top-left
         0.5f,  0.5f, 0.0f, 1.0f, 0.0f, // Top-right
         0.5f, -0.5f, 0.0f, 0.0f, 1.0f, // Bottom-right

         0.5f, -0.5f, 0.0f, 0.0f, 1.0f, // Bottom-right
        -0.5f, -0.5f, 1.0f, 1.0f, 1.0f, // Bottom-left
        -0.5f,  0.5f, 1.0f, 0.0f, 0.0f  // Top-left
    };

By calling glDrawArrays instead of glDrawElements like before, the element buffer will simply be ignored:

    glDrawArrays(GL_TRIANGLES, 0, 6);

The rectangle is rendered as it should be, but the repetition of vertex data is a waste of memory. Using an element buffer allows you to reuse data:

    float vertices[] = {
        -0.5f,  0.5f, 1.0f, 0.0f, 0.0f, // Top-left
         0.5f,  0.5f, 0.0f, 1.0f, 0.0f, // Top-right
         0.5f, -0.5f, 0.0f, 0.0f, 1.0f, // Bottom-right
        -0.5f, -0.5f, 1.0f, 1.0f, 1.0f  // Bottom-left
    };

    ...

    GLuint elements[] = {
        0, 1, 2,
        2, 3, 0
    };

    ...

    glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);

The element buffer still specifies six vertices to form two triangles like before, but now we're able to reuse vertices! This may not seem like much of a big deal at this point, but when your graphics application loads many models into the relatively small graphics memory, element buffers will be an important area of optimization.
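A quick back-of-the-envelope comparison for this rectangle (assuming the usual 4-byte floats and 4-byte indices):

```cpp
#include <cstddef>

// Memory use for the rectangle, with and without an element buffer:
// six standalone vertices versus four shared vertices plus six indices.
// Five floats per vertex; GLuint is a substitute for unsigned int.
constexpr std::size_t bytesWithoutEbo = 6 * 5 * sizeof(float);
constexpr std::size_t bytesWithEbo    = 4 * 5 * sizeof(float)
                                      + 6 * sizeof(unsigned int);
```

That is 120 versus 104 bytes here, a modest saving, but the gap widens quickly for real models where every shared corner avoids duplicating a full set of attributes.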

If you run into trouble, have a look at the full source code.

This chapter has covered all of the core principles of drawing things with OpenGL and it's absolutely essential that you have a good understanding of them before continuing. Therefore I advise you to do the exercises below before diving into textures.

Exercises

  • Alter the vertex shader so that the triangle is upside down. (Solution)
  • Invert the colors of the triangle by altering the fragment shader. (Solution)
  • Alter the program so that each vertex has only one color value, determining the shade of grey. (Solution)


Source: https://open.gl/drawing