Previewing Camera Video with OpenGL Texture in C++


OpenGL is a graphics API, not a standalone platform; it provides a set of functions for manipulating graphics and images. Building on the power of textures, this article implements an Android OpenGL texture preview of the camera video stream in C++.

Project GitHub repository: https://github.com/wangyongyao1989/WyFFmpeg

I. Implementation Steps and Preview

II. Acquiring Camera Data

The Android Camera API delivers the preview as a stream of per-frame image data.

1. After opening the camera, create an ImageReader to receive each frame's image data.

    /**
     * Opens the camera.
     */
    @SuppressLint({"WrongConstant", "MissingPermission"})
    public void openCamera() {
        if (checkSelfPermission(mContext, Manifest.permission.CAMERA) 
                != PackageManager.PERMISSION_GRANTED) {
            return;
        }

        mImageReader = ImageReader.newInstance(mPreviewSize.getWidth()
                , mPreviewSize.getHeight()
                , ImageFormat.YUV_420_888, IMAGE_BUFFER_SIZE);
        mImageReader.setOnImageAvailableListener(mVideoCapture
                , mBackgroundHandler);

        Log.i(TAG, "openCamera");

        CameraManager manager = (CameraManager) 
                mContext.getSystemService(Context.CAMERA_SERVICE);
        try {
            if (!mCameraOpenCloseLock.tryAcquire(2500
                    , TimeUnit.MILLISECONDS)) {
                throw new RuntimeException("Time out waiting " +
                        "to lock camera opening.");
            }
            manager.openCamera(mCameraId, mStateCallback
                    , mBackgroundHandler);
        } catch (CameraAccessException e) {
            Log.e(TAG, "Cannot " +
                    "access the camera " + e);
        } catch (InterruptedException e) {
            throw new RuntimeException("Interrupted while " +
                    "trying to lock camera opening.", e);
        }
    }

2. In the ImageReader.OnImageAvailableListener callback, acquire the latest Image and repack its YUV_420_888 planes into a tightly packed byte array.

    @Override
    public void onImageAvailable(ImageReader imageReader) {

        Image image = imageReader.acquireLatestImage();
        if (image != null) {
            if (mPreviewFrameHandler != null) {
                mPreviewFrameHandler.onPreviewFrame(YUV_420_888_data(image), image.getWidth(), image.getHeight());
            }

            image.close();
        }
    }

    private static byte[] YUV_420_888_data(Image image) {
        final int imageWidth = image.getWidth();
        final int imageHeight = image.getHeight();
        final Image.Plane[] planes = image.getPlanes();
        byte[] data = new byte[imageWidth * imageHeight *
                ImageFormat.getBitsPerPixel(ImageFormat.YUV_420_888) / 8];
        int offset = 0;

        for (int plane = 0; plane < planes.length; ++plane) {
            final ByteBuffer buffer = planes[plane].getBuffer();
            final int rowStride = planes[plane].getRowStride();
            // Experimentally, U and V planes have |pixelStride| = 2, which
            // essentially means they are packed.
            final int pixelStride = planes[plane].getPixelStride();
            final int planeWidth = (plane == 0) ? imageWidth : imageWidth / 2;
            final int planeHeight = (plane == 0) ? imageHeight : imageHeight / 2;
            if (pixelStride == 1 && rowStride == planeWidth) {
                // Copy whole plane from buffer into |data| at once.
                buffer.get(data, offset, planeWidth * planeHeight);
                offset += planeWidth * planeHeight;
            } else {
                // Copy pixels one by one respecting pixelStride and rowStride.
                byte[] rowData = new byte[rowStride];
                for (int row = 0; row < planeHeight - 1; ++row) {
                    buffer.get(rowData, 0, rowStride);
                    for (int col = 0; col < planeWidth; ++col) {
                        data[offset++] = rowData[col * pixelStride];
                    }
                }
                // Last row is special in some devices and may not contain the full
                // |rowStride| bytes of data.
                // See http://developer.android.com/reference/android/media/Image.Plane.html#getBuffer()
                buffer.get(rowData, 0, Math.min(rowStride, buffer.remaining()));
                for (int col = 0; col < planeWidth; ++col) {
                    data[offset++] = rowData[col * pixelStride];
                }
            }
        }

        return data;
    }
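
For a width x height frame, the repacked array therefore holds width * height * 3 / 2 bytes: the full-resolution Y plane first, then the quarter-resolution U and V planes. Below is a minimal C++ sketch (illustrative names, not code from the repo) of how the native side can address the three planes; it matches the offsets used later in draw():

#include <cstddef>
#include <cstdint>

struct I420Planes {
    const uint8_t *y;  // width * height bytes
    const uint8_t *u;  // (width / 2) * (height / 2) bytes
    const uint8_t *v;  // (width / 2) * (height / 2) bytes
};

static I420Planes splitI420(const uint8_t *buffer, size_t width, size_t height) {
    I420Planes planes{};
    planes.y = buffer;
    planes.u = buffer + width * height;          // skip the Y plane
    planes.v = buffer + width * height * 5 / 4;  // skip the Y and U planes
    return planes;
}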

III. Setting Up the OpenGL Environment

GLTextureCPlusVideoPlayerView extends GLSurfaceView and implements the GLSurfaceView.Renderer interface:

package com.wangyongyao.glplay.view;

import android.content.Context;
import android.opengl.GLSurfaceView;
import android.util.AttributeSet;
import android.util.Log;


import com.wangyongyao.glplay.OpenGLPlayCallJni;
import com.wangyongyao.glplay.camerahelper.camerahelper.CameraDataHelper;
import com.wangyongyao.glplay.camerahelper.camerahelper.CameraDataListener;
import com.wangyongyao.glplay.utils.OpenGLPlayFileUtils;

import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

/**
 * author : wangyongyao https://github.com/wangyongyao1989
 * Create Time : 2024/9/3 23:57
 * Describe : MyyFFmpeg com.example.myyffmpeg.utils
 */
public class GLTextureCPlusVideoPlayerView extends GLSurfaceView
        implements GLSurfaceView.Renderer, CameraDataListener {


    private static String TAG = GLTextureCPlusVideoPlayerView.class.getSimpleName();
    private OpenGLPlayCallJni mJniCall;
    private Context mContext;


    private int mWidth;
    private int mHeight;
    private CameraDataHelper mCameraHelper;


    public GLTextureCPlusVideoPlayerView(Context context, OpenGLPlayCallJni jniCall) {
        super(context);
        mContext = context;
        mJniCall = jniCall;
        init();
    }

    public GLTextureCPlusVideoPlayerView(Context context, AttributeSet attrs) {
        super(context, attrs);
        mContext = context;
        init();
    }

    private void init() {
        getHolder().addCallback(this);
        setEGLContextClientVersion(3);
        setEGLConfigChooser(8, 8, 8, 8, 16, 0);
        String fragPath = OpenGLPlayFileUtils.getModelFilePath(mContext
                , "texture_video_play_frament.glsl");
        String vertexPath = OpenGLPlayFileUtils.getModelFilePath(mContext
                , "texture_video_play_vert.glsl");
        String picSrc1 = OpenGLPlayFileUtils.getModelFilePath(mContext
                , "wall.jpg");
        mCameraHelper = new CameraDataHelper(getContext(), this);
        mCameraHelper.startCamera();
        mJniCall.glTextureVideoPlayCreate(0, vertexPath, fragPath);

        setRenderer(this);
        setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY);

    }


    private void stopCameraPreview() {
        mCameraHelper.destroy();
    }


    @Override
    public void onDrawFrame(GL10 gl) {
        if (mJniCall != null) {
            mJniCall.glTextureVideoPlayRender();
        }

    }

    @Override
    public void onSurfaceChanged(GL10 gl, int width, int height) {
        Log.e(TAG, "onSurfaceChanged width:" + width + ",height" + height);
        if (mJniCall != null) {
            mJniCall.glTextureVideoPlayInit(null, null, width, height);
        }
        mWidth = width;
        mHeight = height;
        mCameraHelper.initialize(width, height);
    }


    @Override
    public void onSurfaceCreated(GL10 gl10, EGLConfig eglConfig) {
        Log.e(TAG, "onSurfaceCreated:");

    }


    @Override
    public void onPreviewFrame(byte[] yuvData, int width, int height) {
        mJniCall.glTextureVideoPlayDraw(yuvData, width, height, 90);
        requestRender();
    }


    public void destroyRender() {
        mJniCall.glTextureVideoPlayDestroy();
        stopCameraPreview();
    }

}

Note the call to setRenderMode(GLSurfaceView.RENDERMODE_WHEN_DIRTY): the view renders a texture frame only after onPreviewFrame has delivered camera data and requestRender() is called.

    /**
     * The renderer only renders
     * when the surface is created, or when {@link #requestRender} is called.
     *
     * @see #getRenderMode()
     * @see #setRenderMode(int)
     * @see #requestRender()
     */
    public final static int RENDERMODE_WHEN_DIRTY = 0;

IV. Passing Java Data to the C++ Layer Through JNI

1. Java-side implementation:

The wrapper methods are invoked in this order: glTextureVideoPlayCreate -> glTextureVideoPlayInit -> glTextureVideoPlayDraw -> glTextureVideoPlayRender.

/*********************** OpenGL texture video playback ********************/
    public void glTextureVideoPlayCreate(int type, String vertexPath, String fragPath) {
        native_texture_video_play_create(type, vertexPath, fragPath);
    }

    public void glTextureVideoPlayDestroy() {
        native_texture_video_play_destroy();
    }

    public void glTextureVideoPlayInit(Surface surface, AssetManager assetManager
            , int width, int height) {
        native_texture_video_play_init(surface, assetManager, width, height);
    }

    public void glTextureVideoPlayRender() {
        native_texture_video_play_render();
    }

    public void glTextureVideoPlayDraw(byte[] data, int width, int height, int rotation) {
        native_texture_video_play_draw(data, width, height, rotation);
    }

    public void glTextureVideoPlaySetParameters(int params) {
        native_texture_video_play_set_parameters(params);
    }

    public int glTextureVideoPlayGetParameters() {
        return native_texture_video_play_get_parameters();
    }

    private native void native_texture_video_play_create(int type, String vertexPath
            , String fragPath);

    private native void native_texture_video_play_destroy();

    private native void native_texture_video_play_init(Surface surface
            , AssetManager assetManager
            , int width, int height);

    private native void native_texture_video_play_render();

    private native void native_texture_video_play_draw(byte[] data, int width
            , int height, int rotation);

    private native void native_texture_video_play_set_parameters(int params);

    private native int native_texture_video_play_get_parameters();

2. JNI-side implementation:
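
All of the JNI entry points below operate on a single shared renderer instance, textureVideoRender, whose declaration is not shown in the post. Presumably it is a file-scope pointer along these lines (a sketch; the header name is an assumption):

#include "OpenglesTexureVideoRender.h"  // assumed header name

static OpenglesTexureVideoRender *textureVideoRender = nullptr;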

/*********************** OpenGL texture camera preview ********************/
extern "C"
JNIEXPORT void JNICALL
cpp_texture_video_play_creat(JNIEnv *env, jobject thiz, jint type,
                             jstring vertex,
                             jstring frag) {
    const char *vertexPath = env->GetStringUTFChars(vertex, nullptr);
    const char *fragPath = env->GetStringUTFChars(frag, nullptr);
    if (textureVideoRender == nullptr)
        textureVideoRender = new OpenglesTexureVideoRender();

    textureVideoRender->setSharderPath(vertexPath, fragPath);

    env->ReleaseStringUTFChars(vertex, vertexPath);
    env->ReleaseStringUTFChars(frag, fragPath);
}

extern "C"
JNIEXPORT void JNICALL
cpp_texture_video_play_destroy(JNIEnv *env, jobject thiz) {
    // Release the renderer created in cpp_texture_video_play_creat().
    delete textureVideoRender;
    textureVideoRender = nullptr;
}

extern "C"
JNIEXPORT void JNICALL
cpp_texture_video_play_init(JNIEnv *env, jobject thiz,
                            jobject surface,
                            jobject assetManager,
                            jint width,
                            jint height) {
    if (textureVideoRender != nullptr) {
        ANativeWindow *window = surface ? ANativeWindow_fromSurface(env, surface) : nullptr;
        auto *aAssetManager = assetManager ? AAssetManager_fromJava(env, assetManager) : nullptr;
        textureVideoRender->init(window, aAssetManager, (size_t) width, (size_t) height);
    }
}

extern "C"
JNIEXPORT void JNICALL
cpp_texture_video_play_render(JNIEnv *env, jobject thiz) {
    if (textureVideoRender != nullptr) {
        textureVideoRender->render();
    }
}

extern "C"
JNIEXPORT void JNICALL
cpp_texture_video_play_draw(JNIEnv *env, jobject obj, jbyteArray data, jint width, jint height,
                            jint rotation) {
    jbyte *bufferPtr = env->GetByteArrayElements(data, nullptr);
    jsize arrayLength = env->GetArrayLength(data);

    if (textureVideoRender != nullptr) {

        textureVideoRender->draw((uint8_t *) bufferPtr, (size_t) arrayLength, (size_t) width,
                                 (size_t) height,
                                 rotation);
    }

    env->ReleaseByteArrayElements(data, bufferPtr, 0);
}

extern "C"
JNIEXPORT void JNICALL
cpp_texture_video_play_setParameters(JNIEnv *env, jobject thiz, jint p) {

    if (textureVideoRender != nullptr) {
        textureVideoRender->setParameters((uint32_t) p);
    }

}

extern "C"
JNIEXPORT jint JNICALL
cpp_texture_video_play_getParameters(JNIEnv *env, jobject thiz) {
    if (textureVideoRender != nullptr) {
        // Return the value instead of discarding it.
        return (jint) textureVideoRender->getParameters();
    }
    return 0;
}

static const JNINativeMethod methods[] = {
        /*********************** OpenGL texture video playback ********************/
        {"native_texture_video_play_create",         "(I"
                                                     "Ljava/lang/String;"
                                                     "Ljava/lang/String;)V",  (void *) cpp_texture_video_play_creat},
        {"native_texture_video_play_destroy",        "()V",                   (void *) cpp_texture_video_play_destroy},
        {"native_texture_video_play_init",           "(Landroid/view/Surface;"
                                                     "Landroid/content/res"
                                                     "/AssetManager;II)V",    (void *) cpp_texture_video_play_init},
        {"native_texture_video_play_render",         "()V",                   (void *) cpp_texture_video_play_render},
        {"native_texture_video_play_draw",           "([BIII)V",              (void *) cpp_texture_video_play_draw},
        {"native_texture_video_play_set_parameters", "(I)V",                  (void *) cpp_texture_video_play_setParameters},
        {"native_texture_video_play_get_parameters", "()I",                   (void *) cpp_texture_video_play_getParameters},
};

// Register the native methods dynamically
JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM *vm, void *reserved) {
    LOGD("Dynamic registration");
    JNIEnv *env;
    if ((vm)->GetEnv((void **) &env, JNI_VERSION_1_6) != JNI_OK) {
        LOGD("Dynamic registration: GetEnv failed");
        return JNI_ERR;
    }

    // Look up the Java class that declares the native methods
    jclass clazz = env->FindClass(rtmp_class_name);
    if (clazz == nullptr) {
        LOGE("Dynamic registration: FindClass failed");
        return JNI_ERR;
    }

    // Register the native methods
    jint regist_result = env->RegisterNatives(clazz, methods,
                                              sizeof(methods) / sizeof(methods[0]));
    if (regist_result) { // non-zero means failure
        LOGE("Dynamic registration failed, regist_result = %d", regist_result);
    } else {
        LOGI("Dynamic registration succeeded, result = %d", regist_result);
    }
    return JNI_VERSION_1_6;
}
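
JNI_OnLoad references rtmp_class_name, which is defined elsewhere in the repo. Judging from the Java class shown earlier (com.wangyongyao.glplay.OpenGLPlayCallJni), it presumably resolves to something like this (an assumption, not verified against the repo):

static const char *rtmp_class_name = "com/wangyongyao/glplay/OpenGLPlayCallJni";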

       

V. The C++ OpenGL Texture Rendering Implementation

The OpenGL code here is extracted from my GitHub project GitHub - wangyongyao1989/AndroidLearnOpenGL (OpenGL fundamentals and usage).

If you are interested, you can read my related OpenGL blog posts: https://blog.csdn.net/wangyongyao1989/category_6943979.html?spm=1001.2014.3001.5482

1. Shader programs:

The shader programs are GLSL files; their storage paths are passed down to OpenGLShader.cpp in the C++ layer.

  • texture_video_play_vert.glsl, the vertex shader:
#version 320 es

out vec2 v_texcoord;

in vec4 position;
in vec2 texcoord;

void main() {
    v_texcoord = texcoord;
    gl_Position =  position;
}
  • texture_video_play_fragment.glsl, the fragment shader. The constants 1.403, 0.344, 0.714, and 1.770 are the usual BT.601-style YUV-to-RGB coefficients, with U and V recentered from [0, 1] to [-0.5, 0.5] before conversion:
#version 320 es

precision mediump float;

in vec2 v_texcoord;

uniform lowp sampler2D s_textureY;
uniform lowp sampler2D s_textureU;
uniform lowp sampler2D s_textureV;

// "gl_"-prefixed names are reserved in GLSL ES 3.x, so use a custom output.
out vec4 fragColor;

void main() {
     float y, u, v, r, g, b;
     y = texture(s_textureY, v_texcoord).r;
     u = texture(s_textureU, v_texcoord).r;
     v = texture(s_textureV, v_texcoord).r;
     u = u - 0.5;
     v = v - 0.5;
     r = y + 1.403 * v;
     g = y - 0.344 * u - 0.714 * v;
     b = y + 1.770 * u;
     fragColor = vec4(r, g, b, 1.0);
}

2. Compiling, linking, and using the shader program in OpenGLShader.cpp:

//
// Created by MMM on 2024/8/8.
//

#include "OpenGLShader.h"

GLuint
OpenGLShader::createProgram() {
    vertexShader = loadShader(GL_VERTEX_SHADER, gVertexShaderCode);
    LOGI("=====gVertexShaderCode :%s", gVertexShaderCode);
    LOGI("======gFragmentShaderCode :%s", gFragmentShaderCode);
    if (!vertexShader) {
        checkGlError("loadShader GL_VERTEX_SHADER");
        return 0;
    }

    fraShader = loadShader(GL_FRAGMENT_SHADER, gFragmentShaderCode);

    if (!fraShader) {
        checkGlError("loadShader GL_FRAGMENT_SHADER");
        return 0;
    }

    shaderId = glCreateProgram();      // Create a program object
    if (shaderId) {
        glAttachShader(shaderId, vertexShader);        // Attach the vertex shader to the program
        checkGlError("glAttachShader");
        glAttachShader(shaderId, fraShader);
        checkGlError("glAttachShader");
        glLinkProgram(shaderId);   // Link the program object
        GLint linkStatus = GL_FALSE;
        glGetProgramiv(shaderId, GL_LINK_STATUS, &linkStatus);  // Check whether linking succeeded
        if (linkStatus != GL_TRUE) {
            GLint bufLength = 0;
            glGetProgramiv(shaderId, GL_INFO_LOG_LENGTH, &bufLength);
            if (bufLength) {
                char *buf = (char *) malloc(bufLength);
                if (buf) {
                    glGetProgramInfoLog(shaderId, bufLength, NULL, buf);
                    LOGE("Could not link shaderId:\n%s\n", buf);
                    free(buf);
                }
            }
            glDeleteProgram(shaderId);     // Linking failed; delete the program object
            shaderId = 0;
        }
    }
    return shaderId;
}

/**
 * Load and compile a shader
 * @param shaderType GL_VERTEX_SHADER or GL_FRAGMENT_SHADER
 * @param pSource    the shader source code
 * @return the shader object, or 0 on failure
 */
GLuint OpenGLShader::loadShader(GLenum shaderType, const char *pSource) {
    GLuint shader = glCreateShader(shaderType);     // Create the shader object
    if (shader) {
        glShaderSource(shader, 1, &pSource, NULL);  // Attach the source to the shader object
        glCompileShader(shader);                    // Compile the shader
        GLint compiled = 0;
        glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
        if (!compiled) {
            GLint infoLen = 0;
            glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
            if (infoLen) {
                char *buf = (char *) malloc(infoLen);
                if (buf) {
                    glGetShaderInfoLog(shader, infoLen, NULL, buf);
                    LOGE("Could not compile shader %d:\n%s\n",
                         shaderType, buf);
                    free(buf);
                }
            }
            glDeleteShader(shader);     // Delete the failed shader object even if no log is available
            shader = 0;
        }
    }
    return shader;
}

bool OpenGLShader::getSharderPath(const char *vertexPath, const char *fragmentPath) {
    ifstream vShaderFile;
    ifstream fShaderFile;

    // ensure ifstream objects can throw exceptions:
    vShaderFile.exceptions(ifstream::failbit | ifstream::badbit);
    fShaderFile.exceptions(ifstream::failbit | ifstream::badbit);
    try {
        // open files
        vShaderFile.open(vertexPath);
        fShaderFile.open(fragmentPath);
        stringstream vShaderStream, fShaderStream;
        // read file's buffer contents into streams
        vShaderStream << vShaderFile.rdbuf();
        fShaderStream << fShaderFile.rdbuf();
        // close file handlers
        vShaderFile.close();
        fShaderFile.close();
        // convert stream into string
        vertexCode = vShaderStream.str();
        fragmentCode = fShaderStream.str();
    }
    catch (ifstream::failure &e) {
        LOGE("Could not getSharderPath error :%s", e.what());
        return false;
    }
    gVertexShaderCode = vertexCode.c_str();
    gFragmentShaderCode = fragmentCode.c_str();

    return true;
}

void OpenGLShader::printGLString(const char *name, GLenum s) {
    const char *v = (const char *) glGetString(s);
    LOGI("OpenGL %s = %s\n", name, v);
}

void OpenGLShader::checkGlError(const char *op) {
    for (GLint error = glGetError(); error; error = glGetError()) {
        LOGI("after %s() glError (0x%x)\n", op, error);
    }
}

OpenGLShader::~OpenGLShader() {
    if (vertexShader) {
        glDeleteShader(vertexShader);
    }
    if (fraShader) {
        glDeleteShader(fraShader);
    }
    vertexCode.clear();
    fragmentCode.clear();

    gVertexShaderCode = nullptr;
    gFragmentShaderCode = nullptr;
}

OpenGLShader::OpenGLShader() {

}

  

3. OpenglesTexureVideoRender.cpp performs the YUV texture rendering:

The overall flow is: createProgram() -> createTextures() -> draw() -> render().
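
The member variables used in the snippets below are declared in the class header, which the post does not include. Reconstructed from how they are used, the relevant part presumably looks like this (a sketch, not the repo's exact header):

#include <cstdint>
#include <memory>
#include <GLES3/gl3.h>
#include "OpenGLShader.h"

class OpenglesTexureVideoRender {
    // ... constructor and init()/render()/draw()/updateFrame() omitted ...
    OpenGLShader *lightColorShader = nullptr;

    GLuint m_program = 0, m_vertexShader = 0, m_pixelShader = 0;
    GLuint m_vertexPos = 0, m_textureLoc = 0;                          // attributes
    GLint m_textureYLoc = -1, m_textureULoc = -1, m_textureVLoc = -1;  // samplers
    GLuint m_textureIdY = 0, m_textureIdU = 0, m_textureIdV = 0;

    size_t m_width = 0, m_height = 0, m_length = 0;
    size_t m_sizeY = 0, m_sizeU = 0, m_sizeV = 0;
    float m_rotation = 0.0f;

    std::unique_ptr<uint8_t[]> m_pDataY;               // owns Y + U + V storage
    uint8_t *m_pDataU = nullptr, *m_pDataV = nullptr;  // point into m_pDataY

    bool isDirty = false;
    bool isProgramChanged = false;
};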

  • createProgram() builds the program object and looks up the vertex-position attribute, the texture-coordinate attribute, and the three sampler uniforms:
int
OpenglesTexureVideoRender::createProgram() {

    m_program = lightColorShader->createProgram();
    m_vertexShader = lightColorShader->vertexShader;
    m_pixelShader = lightColorShader->fraShader;
    LOGI("OpenglesTexureVideoRender createProgram m_program:%d", m_program);

    if (!m_program) {
        LOGE("Could not create program.");
        return 0;
    }

    // Look up attribute and uniform locations
    m_vertexPos = (GLuint) glGetAttribLocation(m_program, "position");
    m_textureYLoc = glGetUniformLocation(m_program, "s_textureY");
    m_textureULoc = glGetUniformLocation(m_program, "s_textureU");
    m_textureVLoc = glGetUniformLocation(m_program, "s_textureV");
    m_textureLoc = (GLuint) glGetAttribLocation(m_program, "texcoord");

    return m_program;
}
  • createTextures() creates one texture for each of the Y, U, and V planes:
bool OpenglesTexureVideoRender::createTextures() {
    auto widthY = (GLsizei) m_width;
    auto heightY = (GLsizei) m_height;

    glActiveTexture(GL_TEXTURE0);
    glGenTextures(1, &m_textureIdY);
    glBindTexture(GL_TEXTURE_2D, m_textureIdY);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, widthY, heightY, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
                 nullptr);

    if (!m_textureIdY) {
//        check_gl_error("Create Y texture");
        return false;
    }

    GLsizei widthU = (GLsizei) m_width / 2;
    GLsizei heightU = (GLsizei) m_height / 2;

    glActiveTexture(GL_TEXTURE1);
    glGenTextures(1, &m_textureIdU);
    glBindTexture(GL_TEXTURE_2D, m_textureIdU);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, widthU, heightU, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
                 nullptr);

    if (!m_textureIdU) {
//        check_gl_error("Create U texture");
        return false;
    }

    GLsizei widthV = (GLsizei) m_width / 2;
    GLsizei heightV = (GLsizei) m_height / 2;

    glActiveTexture(GL_TEXTURE2);
    glGenTextures(1, &m_textureIdV);
    glBindTexture(GL_TEXTURE_2D, m_textureIdV);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, widthV, heightV, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE,
                 nullptr);

    if (!m_textureIdV) {
//        check_gl_error("Create V texture");
        return false;
    }

    return true;
}
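
One side note: GL_LUMINANCE is a legacy format that OpenGL ES 3.x keeps only for compatibility. Since the fragment shader samples just the .r channel, the sized single-channel format works equally well on a strict GLES 3 path (a suggested alternative, not what the repo uses):

// Y plane with a sized single-channel format; the U and V textures
// take the same call with widthY / 2 and heightY / 2.
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, widthY, heightY, 0,
             GL_RED, GL_UNSIGNED_BYTE, nullptr);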
  • draw() splits the packed YUV buffer into per-plane pointers:

void OpenglesTexureVideoRender::draw(uint8_t *buffer, size_t length,
                                     size_t width, size_t height,
                                     float rotation) {
    m_length = length;
    m_rotation = rotation;

    video_frame frame{};
    frame.width = width;
    frame.height = height;
    frame.stride_y = width;
    frame.stride_uv = width / 2;
    frame.y = buffer;
    frame.u = buffer + width * height;
    frame.v = buffer + width * height * 5 / 4;

    updateFrame(frame);
}
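
The video_frame struct itself is not shown in the post; based on the fields that draw() and updateFrame() touch, it presumably looks like this (a reconstruction, not the repo's exact definition):

#include <cstddef>
#include <cstdint>

struct video_frame {
    size_t width;      // luma width in pixels
    size_t height;     // luma height in pixels
    size_t stride_y;   // bytes per row in the Y plane
    size_t stride_uv;  // bytes per row in each chroma plane
    uint8_t *y;        // start of the Y plane
    uint8_t *u;        // start of the U plane
    uint8_t *v;        // start of the V plane
};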


void OpenglesTexureVideoRender::updateFrame(const video_frame &frame) {
    m_sizeY = frame.width * frame.height;
    m_sizeU = frame.width * frame.height / 4;
    m_sizeV = frame.width * frame.height / 4;

    if (m_pDataY == nullptr || m_width != frame.width || m_height != frame.height) {
        m_pDataY = std::make_unique<uint8_t[]>(m_sizeY + m_sizeU + m_sizeV);
        m_pDataU = m_pDataY.get() + m_sizeY;
        m_pDataV = m_pDataU + m_sizeU;
        isProgramChanged = true;
    }

    m_width = frame.width;
    m_height = frame.height;

    if (m_width == frame.stride_y) {
        memcpy(m_pDataY.get(), frame.y, m_sizeY);
    } else {
        uint8_t *pSrcY = frame.y;
        uint8_t *pDstY = m_pDataY.get();

        for (int h = 0; h < m_height; h++) {
            memcpy(pDstY, pSrcY, m_width);

            pSrcY += frame.stride_y;
            pDstY += m_width;
        }
    }

    if (m_width / 2 == frame.stride_uv) {
        memcpy(m_pDataU, frame.u, m_sizeU);
        memcpy(m_pDataV, frame.v, m_sizeV);
    } else {
        uint8_t *pSrcU = frame.u;
        uint8_t *pSrcV = frame.v;
        uint8_t *pDstU = m_pDataU;
        uint8_t *pDstV = m_pDataV;

        for (int h = 0; h < m_height / 2; h++) {
            memcpy(pDstU, pSrcU, m_width / 2);
            memcpy(pDstV, pSrcV, m_width / 2);

            pDstU += m_width / 2;
            pDstV += m_width / 2;

            pSrcU += frame.stride_uv;
            pSrcV += frame.stride_uv;
        }
    }

    isDirty = true;
}
  • render() uploads the three YUV planes to their textures on each frame and draws the quad:

void OpenglesTexureVideoRender::render() {
//    LOGI("OpenglesTexureVideoRender render");

    // Set the clear color before clearing the buffers.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    if (!updateTextures() || !useProgram()) return;

    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}


bool OpenglesTexureVideoRender::updateTextures() {
    if (!m_textureIdY && !m_textureIdU && !m_textureIdV
            && !createTextures()) {
        return false;
    }
//    LOGI("OpenglesTexureVideoRender updateTextures");

    if (isDirty) {
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, m_textureIdY);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei) m_width, (GLsizei) m_height, 0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, m_pDataY.get());

        glActiveTexture(GL_TEXTURE1);
        glBindTexture(GL_TEXTURE_2D, m_textureIdU);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei) m_width / 2, (GLsizei) m_height / 2,
                     0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, m_pDataU);

        glActiveTexture(GL_TEXTURE2);
        glBindTexture(GL_TEXTURE_2D, m_textureIdV);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei) m_width / 2, (GLsizei) m_height / 2,
                     0,
                     GL_LUMINANCE, GL_UNSIGNED_BYTE, m_pDataV);

        isDirty = false;

        return true;
    }

    return false;
}
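
render() also calls useProgram(), which the post does not include. Here is a minimal sketch of what that step has to do: bind the program, feed a fullscreen quad to the position/texcoord attributes obtained in createProgram(), and point the three samplers at texture units 0/1/2. The quad coordinates and the vertical texcoord flip are illustrative assumptions, not code copied from the repo:

bool OpenglesTexureVideoRender::useProgram() {
    if (!m_program) return false;

    // Fullscreen quad drawn as a 4-vertex triangle strip; texture
    // coordinates are flipped vertically so the camera image is upright.
    static const GLfloat kVertices[]  = {-1.f, -1.f, 1.f, -1.f, -1.f, 1.f, 1.f, 1.f};
    static const GLfloat kTexCoords[] = { 0.f,  1.f, 1.f,  1.f, 0.f, 0.f, 1.f, 0.f};

    glUseProgram(m_program);
    glVertexAttribPointer(m_vertexPos, 2, GL_FLOAT, GL_FALSE, 0, kVertices);
    glEnableVertexAttribArray(m_vertexPos);
    glVertexAttribPointer(m_textureLoc, 2, GL_FLOAT, GL_FALSE, 0, kTexCoords);
    glEnableVertexAttribArray(m_textureLoc);

    glUniform1i(m_textureYLoc, 0);  // s_textureY -> GL_TEXTURE0
    glUniform1i(m_textureULoc, 1);  // s_textureU -> GL_TEXTURE1
    glUniform1i(m_textureVLoc, 2);  // s_textureV -> GL_TEXTURE2
    return true;
}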
