keywords: OpenGL, buffer data, glBufferData, memory type

Memory Type
  • system memory: CPU-side memory, usually called RAM.
  • video memory: memory on the video card (mainstream cards connect via PCI Express), usually called VRAM.
  • AGP memory: Accelerated Graphics Port, the predecessor of PCI Express and an obsolete connector on PCs; AGP memory is system memory made visible to the GPU through the AGP aperture.
  • shared memory:
    • on integrated GPUs, shared memory is just the system memory;
    • in OpenGL, call glBufferStorage with GL_CLIENT_STORAGE_BIT to hint that the buffer should live in CPU-side memory;
    • in D3D12, a D3D12_HEAP_TYPE_CUSTOM heap serves as shared memory (on a NUMA platform, set MemoryPoolPreference to D3D12_MEMORY_POOL_L0);
    • in Vulkan (with the Vulkan Memory Allocator), set VK_MEMORY_PROPERTY_HOST_CACHED_BIT in VmaAllocationCreateInfo::preferredFlags when creating the allocation (see the sketch after this list).
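
A minimal sketch of the Vulkan/VMA case above, assuming the Vulkan Memory Allocator library and an existing `allocator`; `size` and the buffer usage are placeholder values:

#include <vk_mem_alloc.h>

VkBufferCreateInfo buf_info = {};
buf_info.sType = VK_STRUCTURE_TYPE_BUFFER_CREATE_INFO;
buf_info.size  = size;                                  // placeholder size
buf_info.usage = VK_BUFFER_USAGE_TRANSFER_SRC_BIT;      // placeholder usage

VmaAllocationCreateInfo alloc_info = {};
alloc_info.requiredFlags  = VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT;
alloc_info.preferredFlags = VK_MEMORY_PROPERTY_HOST_CACHED_BIT;  // prefer CPU-cached (shared) memory

VkBuffer buffer;
VmaAllocation allocation;
vmaCreateBuffer(allocator, &buf_info, &alloc_info, &buffer, &allocation, nullptr);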

Reference:
System memory, AGP memory and video memory - CSDN
https://blog.csdn.net/xiajun07061225/article/details/7288365

GL_STATIC_DRAW, GL_STREAM_DRAW, GL_DYNAMIC_DRAW

Difference (credits to redditor RowYourUpboat; a sketch of the static vs. stream patterns follows the list):

  • GL_STATIC_DRAW: basically means “I will load this vertex data once and then never change it.” This would include any static props or level geometry, but also animated models/particles if you are doing all the animation with vertex shaders on the GPU (modern engines with skeletal animation do this, for example).
  • GL_STREAM_DRAW: basically means “I am planning to change this vertex data basically every frame.” If you are manipulating the vertices a lot on the CPU, and it’s not feasible to use shaders instead, you probably want to use this one. Sprites or particles with complex behavior are often best served as STREAM vertices. While STATIC+shaders is preferable for animated geometry, modern hardware can spew incredible amounts of vertex data from the CPU to the GPU every frame without breaking a sweat, so you will generally not notice the performance impact.
  • GL_DYNAMIC_DRAW: basically means “I may need to occasionally update this vertex data, but not every frame.” This is the least common one. It’s not really suited for most forms of animation since those usually require very frequent updates. Animations where the vertex shader interpolates between occasional keyframe updates are one possible case. A game with Minecraft-style dynamic terrain might try using DYNAMIC, since the terrain changes occur less frequently than every frame. DYNAMIC also tends to be useful in more obscure scenarios, such as if you’re batching different chunks of model data in the same vertex buffer, and you occasionally need to move them around.
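
A minimal sketch of the two common patterns above (buffer names, sizes and source arrays are illustrative, not from the quoted post):

// Static geometry: upload once with GL_STATIC_DRAW, then only draw from it.
glBindBuffer(GL_ARRAY_BUFFER, vbo_static);
glBufferData(GL_ARRAY_BUFFER, static_size, static_vertices, GL_STATIC_DRAW);

// Per-frame CPU-generated geometry (sprites/particles): allocate with GL_STREAM_DRAW
// and re-specify the contents every frame (orphaning the old storage first).
glBindBuffer(GL_ARRAY_BUFFER, vbo_stream);
glBufferData(GL_ARRAY_BUFFER, stream_size, nullptr, GL_STREAM_DRAW);    // orphan
glBufferSubData(GL_ARRAY_BUFFER, 0, stream_size, this_frame_vertices);  // refill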

OpenGL buffer objects allow the usage hint to take any of the following nine values (credits to kunluo - CSDN):

  • GL_STATIC_DRAW, GL_STATIC_READ, GL_STATIC_COPY
  • GL_DYNAMIC_DRAW, GL_DYNAMIC_READ, GL_DYNAMIC_COPY
  • GL_STREAM_DRAW, GL_STREAM_READ, GL_STREAM_COPY

Differences:

  • "static" means the data in the VBO will not be changed (specified once, used many times);
  • "dynamic" means the data can be modified frequently (specified many times, used many times);
  • "stream" means the data changes every frame (specified once, used once);
  • "draw" means the data will be sent to the GPU for drawing, "read" means the data will be read back by the application, and "copy" means the data will be used both for drawing and for reading back.

Note that with VBOs only the _DRAW usages are meaningful; _READ and _COPY mainly come into play with pixel buffer objects (PBOs) and framebuffer objects (FBOs).
The driver uses the usage hint to pick the best storage location for a buffer object: for example, GL_STATIC_DRAW and GL_STREAM_DRAW buffers may be placed in video memory, GL_DYNAMIC_DRAW buffers in AGP memory, and any _READ-related buffer in system or AGP memory, where the data is easier to read and write back.
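
To illustrate a _READ usage, a hedged sketch of an asynchronous read-back through a pixel buffer object; `width`, `height` and the timing of the later map are illustrative:

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, nullptr, GL_STREAM_READ);
// With a PBO bound to GL_PIXEL_PACK_BUFFER, glReadPixels writes into the buffer
// instead of client memory and can return without stalling.
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
// Later (ideally a frame or more afterwards), map the PBO to access the pixels on the CPU.
void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
/* ... use pixels ... */
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);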

References:
How to choose between GL_STREAM_DRAW or GL_DYNAMIC_DRAW
https://stackoverflow.com/a/63445983/1645289

glBufferStorage

Credits to redditor:

  • Use glBufferStorage with flags = 0 to allocate local GPU memory. You can then call glBufferStorage with GL_CLIENT_STORAGE_BIT to allocate CPU-side memory (with appropriate map read/write bits), and use glCopyBufferSubData to copy from one to the other. You can also copy from GPU buffer to GPU buffer, in case your data is generated, e.g., by a compute shader that writes to a buffer. (See the sketch after this list.)
  • If you use glBufferData, then what the implementation does with your memory is under-specified. Guaranteed it’s not doing anything like “avoiding a local CPU copy of your data”.
  • glMapBuffer does not allow you to directly modify GPU memory. It either causes the buffer to be stored on the CPU, or it gives you a temporary copy of the buffer (on the CPU) which it copies back to the GPU at the end. Also, if it did give you a direct GPU pointer, it would either hard-sync the GPU or force you to be very careful not to overwrite parts of the buffer currently being processed. If you are explicitly implementing CPU/GPU copies yourself, then you have to implement this double-buffering yourself, which means having 1 GPU buffer and N CPU buffers (for N-buffering).
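
A minimal sketch of the staging workflow from the first bullet, assuming a GL 4.4+ context; buffer names, `size` and `src` are placeholders:

GLuint device_buf, staging_buf;
glGenBuffers(1, &device_buf);
glGenBuffers(1, &staging_buf);

// GPU-local buffer: immutable storage, no client/mapping flags.
glBindBuffer(GL_COPY_WRITE_BUFFER, device_buf);
glBufferStorage(GL_COPY_WRITE_BUFFER, size, nullptr, 0);

// CPU-side staging buffer: client storage, mappable for writing.
glBindBuffer(GL_COPY_READ_BUFFER, staging_buf);
glBufferStorage(GL_COPY_READ_BUFFER, size, nullptr,
                GL_CLIENT_STORAGE_BIT | GL_MAP_WRITE_BIT);

// Fill the staging buffer on the CPU...
void* p = glMapBufferRange(GL_COPY_READ_BUFFER, 0, size, GL_MAP_WRITE_BIT);
memcpy(p, src, size);
glUnmapBuffer(GL_COPY_READ_BUFFER);

// ...then copy staging -> device; the copy runs on the GPU timeline.
glCopyBufferSubData(GL_COPY_READ_BUFFER, GL_COPY_WRITE_BUFFER, 0, 0, size);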

Documents:
OpenGL Performance Tips: Atomic Counter Buffers versus Shader Storage Buffer Objects

glMapBuffer vs. glBufferSubData

Quoted from [Is it better glBufferSubData or glMapBuffer - StackOverflow]:
The good thing about glMapBuffer is that you don't need to copy the data into an array first and then use glBufferSubData to fill the OpenGL buffer. With glMapBuffer, you can copy the data directly into the mapped memory, which OpenGL will transfer to the GPU when necessary. From that point of view, glMapBuffer should be faster when you want to fill a big buffer that is going to be updated frequently. How you copy the data into the buffer between glMapBuffer and glUnmapBuffer also matters.

If you show us the code where you use glMapBuffer and how big the data is, it is easier to judge. In the end, measurements will show which one is better.
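
A side-by-side sketch of the two update paths (vbo, offset, size and src are placeholders):

// Path 1: glBufferSubData - OpenGL copies from your own array into the buffer.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferSubData(GL_ARRAY_BUFFER, offset, size, src);

// Path 2: glMapBufferRange - write straight into the driver-provided mapping, no extra array.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void* dst = glMapBufferRange(GL_ARRAY_BUFFER, offset, size,
                             GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT);
memcpy(dst, src, size);   // or generate the vertex data directly into dst
glUnmapBuffer(GL_ARRAY_BUFFER);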

Documents:
Implicit synchronization - khronos.org

GL_ARRAY_BUFFER vs. GL_ELEMENT_ARRAY_BUFFER

This is mostly for historic reasons. Back when there were no VBOs, the pointers specified with glVertexPointer and similar calls were not "associated" with an OpenGL object of any kind. When VBOs were introduced, this behavior carried over into their semantics, which required a different buffer target for indices and for attributes.

With the introduction of generic vertex attributes, such association functionality was added.

Today it is mostly a hint that lets the OpenGL implementation know how the data is going to be addressed, so it can optimize the data flow accordingly. But it also works well as a mental reminder to the programmer of what is currently being dealt with.

Origin:
https://stackoverflow.com/a/15095302/1645289

How to upload vertices addressed by a VAO to GPU memory

Quoted from Mesh-Voxelization:

struct {
    Eigen::Matrix<float, -1, -1> V;		//vertex buffer
    Eigen::Matrix<uint32_t, -1, -1> F;	//triangle index buffer
} mesh;

struct {
    GLuint program;
    GLuint id_vao;
    GLuint id_vbo_position;
    GLuint id_ebo;
} vao_voxelization;

void init_vao()
{
	/* load the vertex attributes and triangle indices into 'mesh'
	using a geometry library such as Assimp or a glTF loader.
	*/
	load_mesh("C:/monkey.obj", mesh);

	//vao
	GLuint shaders[3] = { vs, gs, fs };	// vs/gs/fs: previously compiled vertex/geometry/fragment shader objects
	create_program(vao_voxelization.program, shaders, 3);
	glUseProgram(vao_voxelization.program);
	glGenVertexArrays(1, &vao_voxelization.id_vao);
	glBindVertexArray(vao_voxelization.id_vao);

	//position (buffer object)
	glGenBuffers(1, &vao_voxelization.id_vbo_position);
	glBindBuffer(GL_ARRAY_BUFFER, vao_voxelization.id_vbo_position);
	glBufferData(GL_ARRAY_BUFFER, mesh.V.cols() * 3 * sizeof(GLfloat), mesh.V.data(), GL_STATIC_DRAW);	// size comes from the vertex matrix V, not the index matrix F
	glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, 0);
	glEnableVertexAttribArray(0);
	glBindBuffer(GL_ARRAY_BUFFER, 0);

	//elements (buffer object)
	glGenBuffers(1, &vao_voxelization.id_ebo);
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vao_voxelization.id_ebo);
	glBufferData(GL_ELEMENT_ARRAY_BUFFER, mesh.F.cols() * 3 * sizeof(GLuint), mesh.F.data(), GL_STATIC_DRAW);
	glBindVertexArray(0);	// unbind the VAO first: the GL_ELEMENT_ARRAY_BUFFER binding is part of VAO state
	glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}

VAO (Vertex Array Objects) vs. VBO (Vertex Buffer Objects)

Difference:

  • Vertex Array Objects (VAOs) are conceptually nothing but thin state wrappers.
  • Vertex Buffer Objects (VBOs) store actual data.

Origin: Use of Vertex Array Objects and Vertex Buffer Objects - StackOverflow
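
For example, at draw time only the VAO (the state wrapper) needs to be rebound; the attribute pointers and element buffer recorded in init_vao() above come back with it. A hedged sketch reusing the names from that example:

glUseProgram(vao_voxelization.program);
glBindVertexArray(vao_voxelization.id_vao);   // restores the attribute setup and the bound EBO
glDrawElements(GL_TRIANGLES, (GLsizei)(mesh.F.cols() * 3), GL_UNSIGNED_INT, nullptr);
glBindVertexArray(0);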

glActiveTexture vs. glBindTexture

Imagine the GPU like some paint processing plant.

There are a number of tanks, which deliver dye to some painting machine. In the painting machine the dye is then applied to the object. Those tanks are the texture units.

Those tanks can be equipped with different kinds of dye. Each kind of dye requires some other kind of solvent. The “solvent” is the texture target. For convenience each tank is connected to some solvent supply, but only one kind of solvent can be used at a time in each tank. So there’s a valve/switch per tank: TEXTURE_CUBE_MAP, TEXTURE_3D, TEXTURE_2D, TEXTURE_1D. You can fill all the dye types into the tank at the same time, but since only one kind of solvent goes in, it will “dilute” only the matching kind of dye. So you can have each kind of texture bound, but the binding with the “most important” solvent will actually go into the tank and mix with the kind of dye it belongs to.

And then there’s the dye itself, which comes from a warehouse and is filled into the tank by “binding” it. That’s your texture.

Origin: Differences and relationship between glActiveTexture and glBindTexture
https://stackoverflow.com/a/8868942/1645289
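
In API terms, a small sketch (tex_color, tex_normal, program and the uniform names are made up for illustration):

// Select "tank" 0 (texture unit 0) and bind a 2D texture to it, then do the same for unit 1.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex_color);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex_normal);

// Tell each sampler uniform which texture unit to read from.
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "u_color_map"), 0);
glUniform1i(glGetUniformLocation(program, "u_normal_map"), 1);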

glBindImageTexture vs. glBindTexture

layout(binding = 0) uniform sampler2D img_input;

That declares a sampler, which gets its data from a texture object. The binding of 0 (you can set that in the shader in GLSL 4.20) says that the 2D texture bound to texture image unit 0 (via glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, ...);) is the texture that will be used for this sampler.

Samplers use the entire texture, including all mipmap levels and array layers. Most texture sampling functions use normalized texture coordinates ([0, 1] map to the size of the texture). Most texture sampling functions also respect filtering properties and other sampling parameters.

layout (binding = 0, rgba32f) uniform image2D img_input;

This declares an image, which represents a single image from a texture. Textures can have multiple images: mipmap levels, array layers, etc. When you use glBindImageTexture, you are binding a single image from a texture.

Images and samplers are completely separate. They have their own set of binding indices; it’s perfectly valid to bind a texture to GL_TEXTURE0 and an image from a different texture to image binding 0. Using texture functions for the associated sampler will read from what is bound to GL_TEXTURE0, while image functions on the associated image variable will read from the image bound to image binding 0.

Image access ignores all sampling parameters. Image accessing functions always use integer texel coordinates.

Samplers can only read data from textures; image variables can read and/or write data, as well as perform atomic operations on them. Of course, writing data from shaders requires special care, specifically when someone goes to read that data.

Origin: What is the difference between glBindImageTexture() and glBindTexture()?
https://stackoverflow.com/a/37140611/1645289
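
A brief sketch of both bindings for the same texture object (`tex` is an assumed, already-created GL_TEXTURE_2D with GL_RGBA32F storage):

// Texture/sampler binding: the whole texture (all mip levels) at texture unit 0.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);

// Image binding: a single image (mip level 0) at image unit 0, with read/write access.
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA32F);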


He who thinks too much about every step he takes will always stay on one leg. - Chinese proverb