Full code: zaizai77/Cherno-OpenGL (an OpenGL beginner's learning journey)
glm::mat4 trans = glm::mat4(1.0f); // initialize to the identity matrix; newer GLM versions no longer default to it
trans = glm::rotate(trans, glm::radians(90.0f), glm::vec3(0.0, 0.0, 1.0));
trans = glm::scale(trans, glm::vec3(0.5, 0.5, 0.5));
We scale the container to 0.5 on every axis and rotate it 90 degrees around the z-axis. (Note the code calls rotate first: since matrix multiplication is read right to left, the scale is applied to the vertices first, then the rotation.)
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 transform;
void main()
{
gl_Position = transform * vec4(aPos, 1.0f);
TexCoord = vec2(aTexCoord.x, 1.0 - aTexCoord.y);
}
A uniform mat4 named transform is declared so the transformation matrix can be passed in from the application.
glm::mat4 trans = glm::mat4(1.0f);
trans = glm::translate(trans, glm::vec3(0.5f, -0.5f, 0.0f));
trans = glm::rotate(trans, (float)glfwGetTime(), glm::vec3(0.0f, 0.0f, 1.0f));
The container now rotates over time.
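A minimal sketch of how this matrix reaches the shader each frame (assuming the linked program object is called shaderProgram, which is a hypothetical name here; glGetUniformLocation and glUniformMatrix4fv are the standard GL calls):
#include <glm/gtc/type_ptr.hpp> // for glm::value_ptr
unsigned int transformLoc = glGetUniformLocation(shaderProgram, "transform");
glUniformMatrix4fv(transformLoc, 1, GL_FALSE, glm::value_ptr(trans)); // GLM stores matrices column-major, so no transpose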
Coordinate Systems
- Local Space (also called Object Space)
- World Space
- View Space (also called Eye Space)
- Clip Space
- Screen Space
These are all the different states a vertex goes through before it is finally turned into a fragment.
- Local coordinates are the coordinates of the object relative to its local origin; they are the coordinates the object begins in.
- The next step is transforming the local coordinates to world-space coordinates, which live in a larger extent. These coordinates are relative to the world's global origin, placed together with all the other objects relative to that origin.
- Next we transform the world coordinates to view-space coordinates, so that every coordinate is seen from the camera's, or viewer's, point of view.
- Once the coordinates are in view space, we project them to clip coordinates. Clip coordinates are processed to the -1.0 to 1.0 range and determine which vertices will end up on the screen.
- Finally, we transform the clip coordinates to screen coordinates in a process called the viewport transform. The viewport transform maps coordinates from the -1.0 to 1.0 range to the coordinate range defined by glViewport. The resulting coordinates are then sent to the rasterizer, which turns them into fragments.
The viewing box a projection matrix creates is called a frustum; every coordinate that ends up inside this frustum will appear on the user's screen. The process of converting coordinates within a given range to normalized device coordinates (which are easily mapped to 2D view-space coordinates) is called projection, because the projection matrix projects 3D coordinates into the normalized device coordinate system that maps easily to 2D.
Once all vertices are transformed to clip space, a final operation called perspective division is performed: the x, y and z components of the position vector are each divided by the vector's homogeneous w component. Perspective division is what transforms 4D clip-space coordinates into 3D normalized device coordinates. This step runs automatically at the end of each vertex shader invocation.
After this stage, the resulting coordinates are mapped to screen space (using the settings from glViewport) and turned into fragments.
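For example, with an 800×600 window, the viewport used by that final mapping would typically be set once during initialization:
glViewport(0, 0, 800, 600); // lower-left corner at (0,0), 800x600 pixels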
Orthographic Projection
Create an orthographic projection matrix with glm::ortho:
glm::ortho(0.0f, 800.0f, 0.0f, 600.0f, 0.1f, 100.0f);
The first two parameters specify the left and right coordinates of the frustum, and the third and fourth specify its bottom and top. Together these four parameters define the size of the near and far planes; the fifth and sixth parameters then define the distances to the near and far planes. This projection matrix transforms all coordinates within these x, y and z ranges to normalized device coordinates.
Perspective Projection
The perspective projection matrix maps a given frustum range to clip space, but it also manipulates the w value of each vertex coordinate so that the further a vertex is from the viewer, the larger its w component becomes. Coordinates transformed to clip space fall within the range -w to w (anything outside that range is clipped). Once the coordinates are in clip space, perspective division is applied to them:
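out = ( x/w , y/w , z/w )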
Each component of the vertex coordinate is divided by its w component, so vertex coordinates get smaller the further they are from the viewer.
Create a perspective projection matrix with glm::perspective:
glm::mat4 proj = glm::perspective(glm::radians(45.0f), (float)width/(float)height, 0.1f, 100.0f);
Its first parameter defines the fov value, the field of view, which sets how large the view space is. For a realistic view it is usually set to 45.0f. The second parameter sets the aspect ratio, the viewport width divided by its height. The third and fourth parameters set the near and far planes of the frustum. We usually set the near distance to 0.1f and the far distance to 100.0f. All vertices between the near and far plane and inside the frustum will be rendered.
Combining the transformation matrices
We created a transformation matrix for each of the steps above: a model matrix, a view matrix and a projection matrix. A vertex coordinate is transformed to clip coordinates as follows:
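V_clip = M_projection · M_view · M_model · V_local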
Note that the order of matrix multiplication is reversed (remember that we read matrix multiplication from right to left). The resulting vertex should be assigned to gl_Position in the vertex shader; OpenGL then performs perspective division and clipping automatically.
Going 3D
Right-handed System
By convention, OpenGL is a right-handed system. Simply put, the positive x-axis is to your right, the positive y-axis points up, and the positive z-axis points backwards. Imagine your screen at the center of the three axes: the positive z-axis goes through the screen towards you.
// create transformations
glm::mat4 model = glm::mat4(1.0f); // make sure to initialize matrix to identity matrix first
glm::mat4 view = glm::mat4(1.0f);
glm::mat4 projection = glm::mat4(1.0f);
model = glm::rotate(model, glm::radians(-55.0f), glm::vec3(1.0f, 0.0f, 0.0f));
view = glm::translate(view, glm::vec3(0.0f, 0.0f, -3.0f));
projection = glm::perspective(glm::radians(45.0f), (float)SCR_WIDTH / (float)SCR_HEIGHT, 0.1f, 100.0f);
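These matrices still have to reach the shader; a minimal sketch using the raw GL calls (again assuming a program object named shaderProgram, and the uniform names model, view and projection used by the vertex shader further below):
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "model"), 1, GL_FALSE, glm::value_ptr(model));
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "view"), 1, GL_FALSE, glm::value_ptr(view));
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "projection"), 1, GL_FALSE, glm::value_ptr(projection));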
Z-buffer
OpenGL stores all of its depth information in a z-buffer, also known as the depth buffer.
The depth value is stored within each fragment (as the fragment's z value). Whenever a fragment wants to output its color, OpenGL compares its depth value with the z-buffer: if the current fragment is behind another fragment it is discarded, otherwise it overwrites the stored value. This process is called depth testing and is done automatically by OpenGL.
Tell OpenGL to enable depth testing; it is disabled by default. The glEnable and glDisable functions allow us to enable or disable an OpenGL capability; that capability stays enabled/disabled until another call disables/enables it.
glEnable(GL_DEPTH_TEST);
Just like clearing the color buffer, we can clear the depth buffer by specifying the GL_DEPTH_BUFFER_BIT bit in glClear:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
Implementing 10 rotating cubes:
Code:
#include "TestTenRotateCube.h"
#include "Render.h"
#include <glm/ext/matrix_clip_space.hpp>
#include <glm/ext/matrix_transform.hpp>
#include <GLFW/glfw3.h>
namespace test {
TestTenRotateCube::TestTenRotateCube() {
float vertices[] = {
// back face (z = -0.5)
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f,
0.5f, -0.5f, -0.5f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 0.0f, 0.0f,
// front face (z = +0.5)
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
0.5f, -0.5f, 0.5f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 1.0f,
0.5f, 0.5f, 0.5f, 1.0f, 1.0f,
-0.5f, 0.5f, 0.5f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
// left face (x = -0.5)
-0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
-0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
-0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
-0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
// right face (x = +0.5)
0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
// bottom face (y = -0.5)
-0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
0.5f, -0.5f, -0.5f, 1.0f, 1.0f,
0.5f, -0.5f, 0.5f, 1.0f, 0.0f,
0.5f, -0.5f, 0.5f, 1.0f, 0.0f,
-0.5f, -0.5f, 0.5f, 0.0f, 0.0f,
-0.5f, -0.5f, -0.5f, 0.0f, 1.0f,
// top face (y = +0.5)
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f,
0.5f, 0.5f, -0.5f, 1.0f, 1.0f,
0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
0.5f, 0.5f, 0.5f, 1.0f, 0.0f,
-0.5f, 0.5f, 0.5f, 0.0f, 0.0f,
-0.5f, 0.5f, -0.5f, 0.0f, 1.0f
};
// the 36 vertices above are already unrolled per triangle, so the index buffer is simply 0..35
unsigned int indices[] = {
0,1,2,
3,4,5,
6,7,8,
9,10,11,
12,13,14,
15,16,17,
18,19,20,
21,22,23,
24,25,26,
27,28,29,
30,31,32,
33,34,35
};
GLCall(glEnable(GL_BLEND));
GLCall(glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA));
m_VAO = std::make_unique<VertexArray>();
m_VertexBuffer = std::make_unique<VertexBuffer>(vertices, sizeof(vertices));
VertexBufferLayout layout;
layout.Push<float>(3);
layout.Push<float>(2);
m_VAO->AddBuffer(*m_VertexBuffer, layout);
m_IndexBuffer = std::make_unique<IndexBuffer>(indices, 36);
m_Shader = std::make_unique<Shader>("res/shaders/TenRotateCube.shader");
m_Shader->Bind();
m_Shader->SetUniform1i("u_Texture1", 0);
m_Shader->SetUniform1i("u_Texture2", 1);
m_Texture0 = std::make_unique<Texture>("res/textures/14fnm.png");
m_Texture1 = std::make_unique<Texture>("res/textures/smileface.png");
}
TestTenRotateCube::~TestTenRotateCube() {
}
void TestTenRotateCube::OnUpdate(float deltaTime) {
}
void TestTenRotateCube::OnRender() {
GLCall(glClearColor(0.0f, 0.0f, 0.0f, 1.0f));
// depth testing (glEnable(GL_DEPTH_TEST)) is assumed to be enabled during GL setup, otherwise the cube faces overlap incorrectly
GLCall(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));
glm::vec3 cubePositions[] = {
glm::vec3(0.0f, 0.0f, 0.0f),
glm::vec3(2.0f, 5.0f, -15.0f),
glm::vec3(-1.5f, -2.2f, -2.5f),
glm::vec3(-3.8f, -2.0f, -12.3f),
glm::vec3(2.4f, -0.4f, -3.5f),
glm::vec3(-1.7f, 3.0f, -7.5f),
glm::vec3(1.3f, -2.0f, -2.5f),
glm::vec3(1.5f, 2.0f, -2.5f),
glm::vec3(1.5f, 0.2f, -1.5f),
glm::vec3(-1.3f, 1.0f, -1.5f)
};
constexpr Render render;
m_Texture0->Bind(0);
m_Texture1->Bind(1);
{
//glm::mat4 model = glm::mat4(1.0f);
glm::mat4 view = glm::mat4(1.0f);
glm::mat4 projection = glm::mat4(1.0f);
//model = glm::rotate(model, (float)glfwGetTime(), glm::vec3(0.5f, 1.0f, 1.0f));
view = glm::translate(view, glm::vec3(0.0f, 0.0f, -3.0f));
projection = glm::perspective(glm::radians(45.0f), 960.0f / 540.0f, 0.1f, 100.0f);
m_Shader->Bind();
//m_Shader->SetUniformMat4f("model", model);
m_Shader->SetUniformMat4f("view", view);
m_Shader->SetUniformMat4f("projection", projection);
for (unsigned int i = 0; i < 10; i++)
{
glm::mat4 model = glm::mat4(1.0f);
model = glm::translate(model, cubePositions[i]);
float angle = 20.0f * (float)glfwGetTime(); // each cube spins over time
model = glm::rotate(model, glm::radians(angle), glm::vec3(1.0f, 0.3f, 0.5f));
m_Shader->SetUniformMat4f("model", model);
GLCall(render.Draw(*m_VAO, *m_IndexBuffer, *m_Shader));
}
//GLCall(render.Draw(*m_VAO, *m_IndexBuffer, *m_Shader));
}
}
void TestTenRotateCube::OnImGuiRender() {
}
}
#shader vertex
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;
out vec2 TexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main() {
gl_Position = projection * view * model * vec4(aPos, 1.0f);
TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
#shader fragment
#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2D u_Texture1;
uniform sampler2D u_Texture2;
void main() {
FragColor = mix(texture(u_Texture1, TexCoord), texture(u_Texture2, TexCoord), 0.2);
}
Camera
We can simulate a camera by moving every object in the scene in the opposite direction, giving the illusion that we are moving rather than the scene.
When we talk about camera/view space, we are talking about all the vertex coordinates of the scene as seen from the camera's perspective, with the camera as the scene's origin: the view matrix transforms all world coordinates into view coordinates relative to the camera's position and orientation.
Define a camera position:
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
The positive z-axis points out of the screen towards you, so if we want the camera to move backwards, we move it along the positive z-axis.
Camera direction
glm::vec3 cameraTarget = glm::vec3(0.0f, 0.0f, 0.0f);
glm::vec3 cameraDirection = glm::normalize(cameraPos - cameraTarget);
Direction vector is not the best name, since it actually points in the opposite direction of the target: from the target towards the camera.
Right axis
glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f);
glm::vec3 cameraRight = glm::normalize(glm::cross(up, cameraDirection)); // cross product
Up axis
glm::vec3 cameraUp = glm::cross(cameraDirection, cameraRight);
Look At
One of the advantages of matrices is that if you define a coordinate space with 3 perpendicular (or non-linear) axes, you can create a matrix from those 3 axes plus a translation vector, and multiply any vector by this matrix to transform it into that coordinate space. This is exactly what the LookAt matrix does. Now that we have 3 perpendicular axes and a position defining the camera space, we can create our own LookAt matrix:
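LookAt = | Rx Ry Rz 0 |   | 1 0 0 -Px |
         | Ux Uy Uz 0 | * | 0 1 0 -Py |
         | Dx Dy Dz 0 |   | 0 0 1 -Pz |
         | 0  0  0  1 |   | 0 0 0   1 |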
Here R is the right vector, U is the up vector, D is the direction vector and P is the camera's position vector. Note that the position is negated, because we ultimately want to translate the world in the opposite direction of our own movement. Using this LookAt matrix as the view matrix effectively transforms all world coordinates into the view space we just defined. The LookAt matrix does exactly what its name says: it creates a view matrix that looks at a given target.
glm::mat4 view;
view = glm::lookAt(glm::vec3(0.0f, 0.0f, 3.0f),
glm::vec3(0.0f, 0.0f, 0.0f),
glm::vec3(0.0f, 1.0f, 0.0f));
The glm::lookAt function takes a position, a target and an up vector. It creates the same view matrix we used in the previous section.
float radius = 10.0f;
float camX = sin(glfwGetTime()) * radius;
float camZ = cos(glfwGetTime()) * radius;
glm::mat4 view;
view = glm::lookAt(glm::vec3(camX, 0.0, camZ), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0));
Rotating the camera
glm::vec3 cameraPos = glm::vec3(0.0f, 0.0f, 3.0f);
glm::vec3 cameraFront = glm::vec3(0.0f, 0.0f, -1.0f);
glm::vec3 cameraUp = glm::vec3(0.0f, 1.0f, 0.0f);
view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
void processInput(GLFWwindow *window)
{
...
float cameraSpeed = 0.05f; // adjust accordingly
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
cameraPos += cameraSpeed * cameraFront;
if (glfwGetKey(window, GLFW_KEY_S) == GLFW_PRESS)
cameraPos -= cameraSpeed * cameraFront;
if (glfwGetKey(window, GLFW_KEY_A) == GLFW_PRESS)
cameraPos -= glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
if (glfwGetKey(window, GLFW_KEY_D) == GLFW_PRESS)
cameraPos += glm::normalize(glm::cross(cameraFront, cameraUp)) * cameraSpeed;
}
Freely moving the camera
The WASD keys move the camera.
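One caveat with the constant cameraSpeed above: movement speed then depends on the frame rate. The usual fix is to scale the speed by the time elapsed between frames; a minimal sketch of that deltaTime bookkeeping:
float deltaTime = 0.0f; // time between the current frame and the last frame
float lastFrame = 0.0f;
// inside the render loop:
float currentFrame = (float)glfwGetTime();
deltaTime = currentFrame - lastFrame;
lastFrame = currentFrame;
float cameraSpeed = 2.5f * deltaTime; // use this in processInput instead of the fixed 0.05f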
Add mouse look and scroll-wheel zoom:
// register the callbacks
void TestTenRotateCube::OnStart(GLFWwindow* window) {
m_Window = window;
glfwSetWindowUserPointer(m_Window, reinterpret_cast<void*>(this));
glfwSetCursorPosCallback(m_Window, test::mouse_callback);
glfwSetScrollCallback(m_Window, test::scroll_callback);
// tell GLFW to capture our mouse
glfwSetInputMode(window, GLFW_CURSOR, GLFW_CURSOR_DISABLED);
}
// the actual callback functions
//--------------------------------------------------------------------------
void mouse_callback(GLFWwindow* window, double xposIn, double yposIn) {
TestTenRotateCube* wind = reinterpret_cast<TestTenRotateCube*>(glfwGetWindowUserPointer(window));
wind->mouse_callback_fun(window, xposIn, yposIn);
}
void TestTenRotateCube::mouse_callback_fun(GLFWwindow* window, double xposIn, double yposIn) {
float xpos = static_cast<float>(xposIn);
float ypos = static_cast<float>(yposIn);
if (firstMouse) {
lastX = xpos;
lastY = ypos;
firstMouse = false;
}
float xoffset = xpos - lastX;
float yoffset = lastY - ypos; // reversed since y-coordinates go from bottom to top
lastX = xpos;
lastY = ypos;
float sensitivity = 0.1f; // change this value to your liking
xoffset *= sensitivity;
yoffset *= sensitivity;
yaw += xoffset;
pitch += yoffset;
// make sure that when pitch is out of bounds, screen doesn't get flipped
if (pitch > 89.0f)
pitch = 89.0f;
if (pitch < -89.0f)
pitch = -89.0f;
glm::vec3 front;
front.x = cos(glm::radians(yaw)) * cos(glm::radians(pitch));
front.y = sin(glm::radians(pitch));
front.z = sin(glm::radians(yaw)) * cos(glm::radians(pitch));
cameraFront = glm::normalize(front);
}
void scroll_callback(GLFWwindow* window, double xoffset, double yoffset) {
TestTenRotateCube* wind = reinterpret_cast<TestTenRotateCube*>(glfwGetWindowUserPointer(window));
wind->scroll_callback_fun(window, xoffset, yoffset);
}
// glfw: whenever the mouse scroll wheel scrolls, this callback is called
// ----------------------------------------------------------------------
void TestTenRotateCube::scroll_callback_fun(GLFWwindow* window, double xoffset, double yoffset) {
fov -= (float)yoffset;
if (fov < 1.0f) {
fov = 1.0f;
}
if (fov > 45.0f) {
fov = 45.0f;
}
}
The field of view, or fov, defines how much of the scene we can see. When the field of view gets smaller, the scene's projected space shrinks, producing the sensation of zooming in.
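For the zoom to take effect, the projection matrix has to be rebuilt from fov each frame; a minimal sketch, assuming fov is the member updated by scroll_callback_fun above:
projection = glm::perspective(glm::radians(fov), 960.0f / 540.0f, 0.1f, 100.0f);
m_Shader->SetUniformMat4f("projection", projection);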
Camera class:
The official LearnOpenGL code:
#ifndef CAMERA_H
#define CAMERA_H
#include <glad/glad.h>
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
// Defines several possible options for camera movement. Used as abstraction to stay away from window-system specific input methods
enum Camera_Movement {
FORWARD,
BACKWARD,
LEFT,
RIGHT
};
// Default camera values
const float YAW = -90.0f;
const float PITCH = 0.0f;
const float SPEED = 2.5f;
const float SENSITIVITY = 0.1f;
const float ZOOM = 45.0f;
// An abstract camera class that processes input and calculates the corresponding Euler Angles, Vectors and Matrices for use in OpenGL
class Camera
{
public:
// camera Attributes
glm::vec3 Position;
glm::vec3 Front;
glm::vec3 Up;
glm::vec3 Right;
glm::vec3 WorldUp;
// euler Angles
float Yaw;
float Pitch;
// camera options
float MovementSpeed;
float MouseSensitivity;
float Zoom;
// constructor with vectors
Camera(glm::vec3 position = glm::vec3(0.0f, 0.0f, 0.0f), glm::vec3 up = glm::vec3(0.0f, 1.0f, 0.0f), float yaw = YAW, float pitch = PITCH) : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVITY), Zoom(ZOOM)
{
Position = position;
WorldUp = up;
Yaw = yaw;
Pitch = pitch;
updateCameraVectors();
}
// constructor with scalar values
Camera(float posX, float posY, float posZ, float upX, float upY, float upZ, float yaw, float pitch) : Front(glm::vec3(0.0f, 0.0f, -1.0f)), MovementSpeed(SPEED), MouseSensitivity(SENSITIVITY), Zoom(ZOOM)
{
Position = glm::vec3(posX, posY, posZ);
WorldUp = glm::vec3(upX, upY, upZ);
Yaw = yaw;
Pitch = pitch;
updateCameraVectors();
}
// returns the view matrix calculated using Euler Angles and the LookAt Matrix
glm::mat4 GetViewMatrix()
{
return glm::lookAt(Position, Position + Front, Up);
}
// processes input received from any keyboard-like input system. Accepts input parameter in the form of camera defined ENUM (to abstract it from windowing systems)
void ProcessKeyboard(Camera_Movement direction, float deltaTime)
{
float velocity = MovementSpeed * deltaTime;
if (direction == FORWARD)
Position += Front * velocity;
if (direction == BACKWARD)
Position -= Front * velocity;
if (direction == LEFT)
Position -= Right * velocity;
if (direction == RIGHT)
Position += Right * velocity;
}
// processes input received from a mouse input system. Expects the offset value in both the x and y direction.
void ProcessMouseMovement(float xoffset, float yoffset, GLboolean constrainPitch = true)
{
xoffset *= MouseSensitivity;
yoffset *= MouseSensitivity;
Yaw += xoffset;
Pitch += yoffset;
// make sure that when pitch is out of bounds, screen doesn't get flipped
if (constrainPitch)
{
if (Pitch > 89.0f)
Pitch = 89.0f;
if (Pitch < -89.0f)
Pitch = -89.0f;
}
// update Front, Right and Up Vectors using the updated Euler angles
updateCameraVectors();
}
// processes input received from a mouse scroll-wheel event. Only requires input on the vertical wheel-axis
void ProcessMouseScroll(float yoffset)
{
Zoom -= (float)yoffset;
if (Zoom < 1.0f)
Zoom = 1.0f;
if (Zoom > 45.0f)
Zoom = 45.0f;
}
private:
// calculates the front vector from the Camera's (updated) Euler Angles
void updateCameraVectors()
{
// calculate the new Front vector
glm::vec3 front;
front.x = cos(glm::radians(Yaw)) * cos(glm::radians(Pitch));
front.y = sin(glm::radians(Pitch));
front.z = sin(glm::radians(Yaw)) * cos(glm::radians(Pitch));
Front = glm::normalize(front);
// also re-calculate the Right and Up vector
Right = glm::normalize(glm::cross(Front, WorldUp)); // normalize the vectors, because their length gets closer to 0 the more you look up or down which results in slower movement.
Up = glm::normalize(glm::cross(Right, Front));
}
};
#endif
How do we use it? I put my camera-related code into a separate test class and draw it on demand; the official examples write everything in a single source file.
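A minimal usage sketch (window and deltaTime come from the surrounding application code and are assumed here): construct a camera, feed it input every frame, and derive the view and projection matrices from it:
Camera camera(glm::vec3(0.0f, 0.0f, 3.0f));
// per frame, after computing deltaTime:
if (glfwGetKey(window, GLFW_KEY_W) == GLFW_PRESS)
    camera.ProcessKeyboard(FORWARD, deltaTime);
glm::mat4 view = camera.GetViewMatrix();
glm::mat4 projection = glm::perspective(glm::radians(camera.Zoom), 960.0f / 540.0f, 0.1f, 100.0f);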
Reference: 变换 - LearnOpenGL CN