Android camera2


1. Preface

This document consolidates what has been learned so far and is meant to make future problem investigation easier.
The analysis follows how a camera app uses the camera2 API:
(1) Open the camera: openCamera
(2) Create a session: createCaptureSession
(3) Start preview: setRepeatingRequest
(4) Stop preview: stopRepeating
(5) Close the camera: close

2. Opening the Camera

1. The first step for a camera app using the camera2 API is to call CameraManager's openCamera() interface; this interface takes three parameters.

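The three parameters are the camera ID string, a CameraDevice.StateCallback, and a Handler for the callback thread. A minimal, hedged usage sketch follows; the class name CameraHelper and the background Handler are illustrative, not part of the original analysis.

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;

public class CameraHelper {
    private CameraDevice mCameraDevice;

    /** Opens the first enumerated camera; requires the android.permission.CAMERA permission. */
    public void open(Context context, Handler backgroundHandler) throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        String cameraId = manager.getCameraIdList()[0];
        // openCamera(cameraId, callback, handler) is the call that kicks off the
        // framework -> CameraService -> HAL open sequence analyzed in this section.
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(CameraDevice camera) {
                mCameraDevice = camera;   // this is the CameraDeviceImpl instance
                // createCaptureSession() is normally called from here.
            }

            @Override
            public void onDisconnected(CameraDevice camera) {
                camera.close();
            }

            @Override
            public void onError(CameraDevice camera, int error) {
                camera.close();
            }
        }, backgroundHandler);
    }
}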

2. Call sequence

2.1 Calling the openCamera interface


2.2 Internally this calls openCameraDeviceUserAsync

The internal method getCameraCharacteristics is called to query the camera device's properties; this goes through binder into CameraService (C++). The properties of interest are mainly the camera's facing (front/back) and the image orientation (0, 90, 180, or 270 degrees).

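For reference, the same facing and orientation values can be read on the app side from the public API; a small sketch, assuming a CameraManager instance is already available (the class name is illustrative):

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;

public final class CharacteristicsDump {
    /** Logs the facing and sensor orientation of one camera id. */
    public static void dump(CameraManager manager, String cameraId)
            throws CameraAccessException {
        // The same metadata that CameraDeviceImpl queries through the binder call
        // into CameraService before it is constructed.
        CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);

        Integer facing = chars.get(CameraCharacteristics.LENS_FACING);
        Integer orientation = chars.get(CameraCharacteristics.SENSOR_ORIENTATION); // 0 / 90 / 180 / 270

        boolean isFront = facing != null
                && facing == CameraCharacteristics.LENS_FACING_FRONT;
        System.out.println("cameraId=" + cameraId
                + " front=" + isFront + " orientation=" + orientation);
    }
}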
When CameraDeviceImpl is constructed, several variables are stored in the object. CameraDeviceImpl also provides a CameraDeviceCallbacks instance; this callback is later handed to CameraDeviceClientBase through CameraService's connect() interface. From then on, any frame-related errors during preview, recording, or capture are delivered through CameraDeviceCallbacks --> CaptureCallback.
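On the app side, such frame-level errors eventually surface through the CameraCaptureSession.CaptureCallback registered with the request. A hedged sketch of the receiving end (logging only, the class name is illustrative):

import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureFailure;
import android.hardware.camera2.CaptureRequest;
import android.util.Log;

/** Receives the per-frame errors that travel through CameraDeviceCallbacks -> CaptureCallback. */
public class PreviewCaptureCallback extends CameraCaptureSession.CaptureCallback {
    private static final String TAG = "PreviewCaptureCallback";

    @Override
    public void onCaptureFailed(CameraCaptureSession session,
            CaptureRequest request, CaptureFailure failure) {
        // failure.getReason() is REASON_ERROR or REASON_FLUSHED; getFrameNumber() identifies the frame.
        Log.w(TAG, "capture failed: reason=" + failure.getReason()
                + " frame=" + failure.getFrameNumber()
                + " imageCaptured=" + failure.wasImageCaptured());
    }
}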
Next, a client is created. Inside the makeClient method, a different client is constructed depending on the camera API version.

new CameraDeviceClient -> new Camera2ClientBase -> new CameraDeviceClientBase() -> new BasicClient(); the Camera2ClientBase constructor also creates a Camera3Device.

Once the client has been constructed, initialize is called.
CameraDeviceClient::initialize() -> Camera2ClientBase::initialize() -> Camera2ClientBase::initializeImpl() -> Camera3Device::initialize -> CameraProviderManager::openSession. The CameraProviderManager class can be understood as the proxy for the HAL camera.

Camera3Device holds a CameraProviderManager object. CameraProviderManager can be understood as the bridge between native code and the HAL camera: it enumerates the camera providers and the camera devices those providers expose, and offers methods to access them.
A digression on how CameraProviderManager is instantiated and initialized. When CameraService is constructed, onFirstRef() runs automatically, and inside it a CameraProviderManager is created, followed by a call to mCameraProviderManager->initialize(). The second parameter of initialize() is a default argument: the HardwareServiceInteractionProxy object sHardwareServiceInteractionProxy. HardwareServiceInteractionProxy has two methods, registerForNotifications() and getService(). getService() obtains a remote proxy of the CameraProvider class running in the android.hardware.camera.provider@2.4-service process. CameraProvider's initialize() method is invoked from the CameraProvider constructor; inside CameraProvider::initialize(), hw_get_module() opens the dynamic library (camera*.so), and afterwards the camera_module_t *mModule inside CameraProvider's CameraModule holds the handle of the .so that actually implements the camera device operations.

Next, look at the addDevice method.

That ends the digression. Returning to deviceInfo3->mInterface->open above: mInterface is the CameraDevice, which holds the CameraModule, so this calls into CameraModule::open and then into the open() of v4l2_camera_hal, the actual implementation in the HAL layer.


3. Creating a Session

1. Once openCamera succeeds, a CameraDeviceImpl object is returned; both creating the session and issuing preview requests later go through this object's methods.

2. Creating a session calls CameraDeviceImpl::createCaptureSession, which takes three parameters.

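The three parameters are the list of output Surfaces, a CameraCaptureSession.StateCallback, and a Handler. A minimal app-side sketch, assuming a preview Surface and a JPEG ImageReader already exist (all names here are illustrative):

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.media.ImageReader;
import android.os.Handler;
import android.view.Surface;

import java.util.Arrays;

public final class SessionStarter {
    /** Creates a session with one preview output and one still-capture output. */
    public static void createSession(CameraDevice device, Surface previewSurface,
            ImageReader jpegReader, Handler handler) throws CameraAccessException {
        device.createCaptureSession(
                Arrays.asList(previewSurface, jpegReader.getSurface()),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        // The streams are now configured down to the HAL;
                        // preview usually starts here via session.setRepeatingRequest(...).
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) {
                        // Stream configuration was rejected, e.g. an unsupported size/format.
                    }
                },
                handler);
    }
}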
3. Code flow for creating the session
frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

    // Once openCamera succeeds, a CameraDeviceImpl object is returned, and the session is
    // created from the onOpened callback. Two surfaces are passed in here: one for still capture and one for preview.
    public void createCaptureSession(List<Surface> outputs,
            CameraCaptureSession.StateCallback callback, Handler handler)
            throws CameraAccessException {
        List<OutputConfiguration> outConfigurations = new ArrayList<>(outputs.size());
        for (Surface surface : outputs) {
            outConfigurations.add(new OutputConfiguration(surface));
        }
        createCaptureSessionInternal(null, outConfigurations, callback,
                checkAndWrapHandler(handler), /*operatingMode*/ICameraDeviceUser.NORMAL_MODE,
                /*sessionParams*/ null);
    }

frameworks/base/core/java/android/hardware/camera2/params/OutputConfiguration.java

    public OutputConfiguration(@NonNull Surface surface) {
        this(SURFACE_GROUP_ID_NONE, surface, ROTATION_0);
    }


    
    @SystemApi
    public OutputConfiguration(int surfaceGroupId, @NonNull Surface surface, int rotation) {
        checkNotNull(surface, "Surface must not be null");
        checkArgumentInRange(rotation, ROTATION_0, ROTATION_270, "Rotation constant");
        mSurfaceGroupId = surfaceGroupId;
        mSurfaceType = SURFACE_TYPE_UNKNOWN;
        mSurfaces = new ArrayList<Surface>(); // the only meaningful element is this surface
        mSurfaces.add(surface);
        mRotation = rotation;
        mConfiguredSize = SurfaceUtils.getSurfaceSize(surface);
        mConfiguredFormat = SurfaceUtils.getSurfaceFormat(surface);
        mConfiguredDataspace = SurfaceUtils.getSurfaceDataspace(surface);
        mConfiguredGenerationId = surface.getGenerationId();
        mIsDeferredConfig = false;
        mIsShared = false;
        mPhysicalCameraId = null;
        mIsMultiResolution = false;
        mSensorPixelModesUsed = new ArrayList<Integer>();
    }
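As the constructor shows, the size, format and dataspace recorded in the OutputConfiguration are read from the Surface itself, so they are whatever the app configured on the producer side. A hedged sketch for a TextureView/SurfaceTexture-backed preview output (the 1920x1080 size is only an example and must be one of the sizes reported by the camera's StreamConfigurationMap):

import android.graphics.SurfaceTexture;
import android.hardware.camera2.params.OutputConfiguration;
import android.view.Surface;

public final class PreviewOutputFactory {
    /** Builds an OutputConfiguration whose size/format are taken from the Surface itself. */
    public static OutputConfiguration forPreview(SurfaceTexture texture) {
        // SurfaceUtils.getSurfaceSize() in the constructor above reads back the
        // buffer size set here (1920x1080 is only an example size).
        texture.setDefaultBufferSize(1920, 1080);
        Surface surface = new Surface(texture);
        return new OutputConfiguration(surface);
    }
}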

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java
Continuing with createCaptureSessionInternal from above:

private void createCaptureSessionInternal(InputConfiguration inputConfig,
            List<OutputConfiguration> outputConfigurations,
            CameraCaptureSession.StateCallback callback, Executor executor,
            int operatingMode, CaptureRequest sessionParams) throws CameraAccessException {
            .......
            try {
                // configure streams and then block until IDLE
                // stream creation starts here
                configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations,
                        operatingMode, sessionParams, createSessionStartTime);
                if (configureSuccess == true && inputConfig != null) {
                    input = mRemoteDevice.getInputSurface();
                }
            } catch (CameraAccessException e) {
                configureSuccess = false;
                pendingException = e;
                input = null;
                if (DEBUG) {
                    Log.v(TAG, "createCaptureSession - failed with exception ", e);
                }
            }

            // Fire onConfigured if configureOutputs succeeded, fire onConfigureFailed otherwise.
            CameraCaptureSessionCore newSession = null;
            if (isConstrainedHighSpeed) {
                ArrayList<Surface> surfaces = new ArrayList<>(outputConfigurations.size());
                for (OutputConfiguration outConfig : outputConfigurations) {
                    surfaces.add(outConfig.getSurface());
                }
                StreamConfigurationMap config =
                    getCharacteristics().get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                SurfaceUtils.checkConstrainedHighSpeedSurfaces(surfaces, /*fpsRange*/null, config);

                newSession = new CameraConstrainedHighSpeedCaptureSessionImpl(mNextSessionId++,
                        callback, executor, this, mDeviceExecutor, configureSuccess,
                        mCharacteristics);
            } else {
                // After the streams are created above, the session object is created; that session
                // object is later used to request camera preview.
                // Creating the session is just constructing a plain Java object.
                newSession = new CameraCaptureSessionImpl(mNextSessionId++, input,
                        callback, executor, this, mDeviceExecutor, configureSuccess);
            }

            // TODO: wait until current session closes, then create the new session
            mCurrentSession = newSession;

            if (pendingException != null) {
                throw pendingException;
            }

            mSessionStateCallback = mCurrentSession.getDeviceStateCallback();
        }
    }

Building the streams

 public boolean configureStreamsChecked(InputConfiguration inputConfig,
            List<OutputConfiguration> outputs, int operatingMode, CaptureRequest sessionParams,
            long createSessionStartTime)
                    throws CameraAccessException {
        // Treat a null input the same an empty list
        if (outputs == null) {
            outputs = new ArrayList<OutputConfiguration>();
        }
        if (outputs.size() == 0 && inputConfig != null) {
            throw new IllegalArgumentException("cannot configure an input stream without " +
                    "any output streams");
        }
        
        // the inputConfig passed in here is null
        checkInputConfiguration(inputConfig);

        boolean success = false;

        synchronized(mInterfaceLock) {
            checkIfCameraClosedOrInError();
            // Streams to create
            HashSet<OutputConfiguration> addSet = new HashSet<OutputConfiguration>(outputs);
            // Streams to delete
            List<Integer> deleteList = new ArrayList<Integer>();

            // Determine which streams need to be created, which to be deleted
            // for a newly created CameraDeviceImpl, mConfiguredOutputs.size() is 0
            for (int i = 0; i < mConfiguredOutputs.size(); ++i) {
                int streamId = mConfiguredOutputs.keyAt(i);
                OutputConfiguration outConfig = mConfiguredOutputs.valueAt(i);

                if (!outputs.contains(outConfig) || outConfig.isDeferredConfiguration()) {
                    // Always delete the deferred output configuration when the session
                    // is created, as the deferred output configuration doesn't have unique surface
                    // related identifies.
                    deleteList.add(streamId);
                } else {
                    addSet.remove(outConfig);  // Don't create a stream previously created
                }
            }

            mDeviceExecutor.execute(mCallOnBusy);
            // for a newly created CameraDeviceImpl, mRepeatingRequestId == REQUEST_ID_NONE,
            // so stopRepeating() effectively does nothing here
            stopRepeating();

            try {
                waitUntilIdle();
                // calls into CameraDeviceClient::beginConfigure, which is an empty implementation that does nothing
                mRemoteDevice.beginConfigure();

                ......
                // Delete all streams first (to free up HW resources)
                for (Integer streamId : deleteList) {
                    mRemoteDevice.deleteStream(streamId);
                    mConfiguredOutputs.delete(streamId);
                }

                // Add all new streams
                // create a stream for each new OutputConfiguration
                for (OutputConfiguration outConfig : outputs) {
                    if (addSet.contains(outConfig)) {
                        // this ends up creating a Camera3OutputStream
                        int streamId = mRemoteDevice.createStream(outConfig);
                        mConfiguredOutputs.put(streamId, outConfig);
                    }
                }

                int offlineStreamIds[];
                if (sessionParams != null) {
                    // passes the stream configuration down to the HAL; this eventually reaches V4L2Camera::setupStreams
                    offlineStreamIds = mRemoteDevice.endConfigure(operatingMode,
                            sessionParams.getNativeCopy(), createSessionStartTime);
                } else {
                    offlineStreamIds = mRemoteDevice.endConfigure(operatingMode, null,
                            createSessionStartTime);
                }
                ......
                success = true;
            } catch (IllegalArgumentException e) {
                // OK. camera service can reject stream config if it's not supported by HAL
                // This is only the result of a programmer misusing the camera2 api.
                Log.w(TAG, "Stream configuration failed due to: " + e.getMessage());
                return false;
            } catch (CameraAccessException e) {
                if (e.getReason() == CameraAccessException.CAMERA_IN_USE) {
                    throw new IllegalStateException("The camera is currently busy." +
                            " You must wait until the previous operation completes.", e);
                }
                throw e;
            } finally {
                if (success && outputs.size() > 0) {
                    mDeviceExecutor.execute(mCallOnIdle);
                } else {
                    // Always return to the 'unconfigured' state if we didn't hit a fatal error
                    mDeviceExecutor.execute(mCallOnUnconfigured);
                }
            }
        }

        return success;
    }

frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp
Create a stream from each OutputConfiguration:


binder::Status CameraDeviceClient::createStream(
        const hardware::camera2::params::OutputConfiguration &outputConfiguration,
        /*out*/
        int32_t* newStreamId) {
    ATRACE_CALL();

    binder::Status res;
    if (!(res = checkPidStatus(__FUNCTION__)).isOk()) return res;

    Mutex::Autolock icl(mBinderSerializationLock);

    // outputConfiguration holds the Surfaces; each Surface holds a graphicBufferProducer, and that
    // producer is what the camera HAL fills with camera data
    const std::vector<sp<IGraphicBufferProducer>>& bufferProducers =
            outputConfiguration.getGraphicBufferProducers();
    size_t numBufferProducers = bufferProducers.size();
    bool deferredConsumer = outputConfiguration.isDeferred();
    bool isShared = outputConfiguration.isShared();
    String8 physicalCameraId = String8(outputConfiguration.getPhysicalCameraId());
    bool deferredConsumerOnly = deferredConsumer && numBufferProducers == 0;
    bool isMultiResolution = outputConfiguration.isMultiResolution();
    ..........
    std::vector<sp<Surface>> surfaces;
    std::vector<sp<IBinder>> binders;
    ..........
    OutputStreamInfo streamInfo;
    bool isStreamInfoValid = false;
    const std::vector<int32_t> &sensorPixelModesUsed =
            outputConfiguration.getSensorPixelModesUsed();
    for (auto& bufferProducer : bufferProducers) {
        // Don't create multiple streams for the same target surface
        // one stream can map to several surfaces, but one surface can belong to only one stream
        sp<IBinder> binder = IInterface::asBinder(bufferProducer);
        ssize_t index = mStreamMap.indexOfKey(binder);
        // entering this branch means the surface already has a stream, saved earlier in mStreamMap
        if (index != NAME_NOT_FOUND) {
            String8 msg = String8::format("Camera %s: Surface already has a stream created for it "
                    "(ID %zd)", mCameraIdStr.string(), index);
            ALOGW("%s: %s", __FUNCTION__, msg.string());
            return STATUS_ERROR(CameraService::ERROR_ALREADY_EXISTS, msg.string());
        }

        // from the surface the app supplied, take its bufferProducer, then build the streamInfo and a
        // C++ Surface, and return them
        sp<Surface> surface;
        res = SessionConfigurationUtils::createSurfaceFromGbp(streamInfo,
                isStreamInfoValid, surface, bufferProducer, mCameraIdStr,
                mDevice->infoPhysical(physicalCameraId), sensorPixelModesUsed);
        .......
        binders.push_back(IInterface::asBinder(bufferProducer));
        surfaces.push_back(surface);
    }

    // If mOverrideForPerfClass is true, do not fail createStream() for small
    // JPEG sizes because existing createSurfaceFromGbp() logic will find the
    // closest possible supported size.

    int streamId = camera3::CAMERA3_STREAM_ID_INVALID;
    std::vector<int> surfaceIds;
      ......
    } else {
       // this calls into Camera3Device; note that streamId and surfaceIds are assigned inside createStream,
       // while the surfaces are handed to Camera3OutputStream; each Camera3OutputStream created bumps streamId
        err = mDevice->createStream(surfaces, deferredConsumer, streamInfo.width,
                streamInfo.height, streamInfo.format, streamInfo.dataSpace,
                static_cast<camera_stream_rotation_t>(outputConfiguration.getRotation()),
                &streamId, physicalCameraId, streamInfo.sensorPixelModesUsed, &surfaceIds,
                outputConfiguration.getSurfaceSetID(), isShared, isMultiResolution);
    }

    if (err != OK) {
       ......
    } else {
        int i = 0;
        // binders holds each surface's bufferProducer, so conceptually each entry in binders is a surface
        for (auto& binder : binders) {
            ALOGV("%s: mStreamMap add binder %p streamId %d, surfaceId %d",
                    __FUNCTION__, binder.get(), streamId, i);
            // mStreamMap maps a surface's bufferProducer to a StreamSurfaceId object, which stores
            // the streamId plus the matching surfaceIds[i]; the streamId says which Camera3OutputStream
            // this is, and surfaceIds[i] is always 0 here
            mStreamMap.add(binder, StreamSurfaceId(streamId, surfaceIds[i]));
            i++;
        }
        
        // one streamId corresponds to one outputConfiguration
        mConfiguredOutputs.add(streamId, outputConfiguration);
        // mapping from each Camera3OutputStream (streamId) to its streamInfo
        mStreamInfoMap[streamId] = streamInfo;

        ALOGV("%s: Camera %s: Successfully created a new stream ID %d for output surface"
                    " (%d x %d) with format 0x%x.",
                  __FUNCTION__, mCameraIdStr.string(), streamId, streamInfo.width,
                  streamInfo.height, streamInfo.format);

        // Set transform flags to ensure preview to be rotated correctly.
        res = setStreamTransformLocked(streamId);

        // Fill in mHighResolutionCameraIdToStreamIdSet map
        const String8 &cameraIdUsed =
                physicalCameraId.size() != 0 ? physicalCameraId : mCameraIdStr;
        const char *cameraIdUsedCStr = cameraIdUsed.string();
        // Only needed for high resolution sensors
        if (mHighResolutionSensors.find(cameraIdUsedCStr) !=
                mHighResolutionSensors.end()) {
            mHighResolutionCameraIdToStreamIdSet[cameraIdUsedCStr].insert(streamId);
        }

        // this is returned to the app
        *newStreamId = streamId;
    }

    return res;
}

frameworks/av/services/camera/libcameraservice/utils/SessionConfigurationUtils.cpp

// Build the OutputStreamInfo: read width, height, format, etc. from the surface the app supplied
binder::Status SessionConfigurationUtils::createSurfaceFromGbp(
        OutputStreamInfo& streamInfo, bool isStreamInfoValid,
        sp<Surface>& surface, const sp<IGraphicBufferProducer>& gbp,
        const String8 &logicalCameraId, const CameraMetadata &physicalCameraMetadata,
        const std::vector<int32_t> &sensorPixelModesUsed){
    .......
    // from the gbp inside the surface above, construct a C++-level Surface object; the C++ Surface inherits from ANativeWindow
    surface = new Surface(gbp, useAsync);
    ANativeWindow *anw = surface.get();

    int width, height, format;
    android_dataspace dataSpace;
    
    
    // query the surface's width, height, format, etc. from the ANativeWindow
    if ((err = anw->query(anw, NATIVE_WINDOW_WIDTH, &width)) != OK) {
        String8 msg = String8::format("Camera %s: Failed to query Surface width: %s (%d)",
                 logicalCameraId.string(), strerror(-err), err);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
    }
    if ((err = anw->query(anw, NATIVE_WINDOW_HEIGHT, &height)) != OK) {
        String8 msg = String8::format("Camera %s: Failed to query Surface height: %s (%d)",
                logicalCameraId.string(), strerror(-err), err);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
    }
    if ((err = anw->query(anw, NATIVE_WINDOW_FORMAT, &format)) != OK) {
        String8 msg = String8::format("Camera %s: Failed to query Surface format: %s (%d)",
                logicalCameraId.string(), strerror(-err), err);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
    }
    if ((err = anw->query(anw, NATIVE_WINDOW_DEFAULT_DATASPACE,
            reinterpret_cast<int*>(&dataSpace))) != OK) {
        String8 msg = String8::format("Camera %s: Failed to query Surface dataspace: %s (%d)",
                logicalCameraId.string(), strerror(-err), err);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
    }
    ......
    // isStreamInfoValid is passed in as false, so the values above are copied into streamInfo
    if (!isStreamInfoValid) {
        streamInfo.width = width;
        streamInfo.height = height;
        streamInfo.format = format;
        streamInfo.dataSpace = dataSpace;
        streamInfo.consumerUsage = consumerUsage;
        streamInfo.sensorPixelModesUsed = overriddenSensorPixelModes;
        return binder::Status::ok();
    }
    ......
    }
    return binder::Status::ok();
}
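The width/height/format that anw->query() returns here are whatever the app set on the producer side of the Surface. For a still-capture target, that producer is typically an ImageReader; a hedged app-side sketch (the 4000x3000 size is illustrative):

import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import android.view.Surface;

public final class JpegOutputFactory {
    /** The reader's size/format are what createSurfaceFromGbp() later queries from the ANativeWindow. */
    public static ImageReader createJpegReader(Handler handler) {
        ImageReader reader = ImageReader.newInstance(
                4000, 3000, ImageFormat.JPEG, /*maxImages*/ 2);
        reader.setOnImageAvailableListener(r -> {
            try (Image image = r.acquireNextImage()) {
                // Save or process the JPEG buffer here.
            }
        }, handler);
        return reader;
    }

    /** This Surface is what the app puts into the session's output list. */
    public static Surface surfaceOf(ImageReader reader) {
        return reader.getSurface();
    }
}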

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::createStream(const std::vector<sp<Surface>>& consumers,
        bool hasDeferredConsumer, uint32_t width, uint32_t height, int format,
        android_dataspace dataSpace, camera_stream_rotation_t rotation, int *id,
        const String8& physicalCameraId, const std::unordered_set<int32_t> &sensorPixelModesUsed,
        std::vector<int> *surfaceIds, int streamSetId, bool isShared, bool isMultiResolution,
        uint64_t consumerUsage) {
    ......
    sp<Camera3OutputStream> newStream;
    ......
    if (format == HAL_PIXEL_FORMAT_BLOB) {
     ......
    } else {
        // construct a Camera3OutputStream
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, streamSetId,
                isMultiResolution);
    }

    size_t consumerCount = consumers.size();
    for (size_t i = 0; i < consumerCount; i++) {
        // Camera3OutputStream::getSurfaceId is implemented to simply return 0
        int id = newStream->getSurfaceId(consumers[i]);
        if (id < 0) {
            SET_ERR_L("Invalid surface id");
            return BAD_VALUE;
        }
        // store the id of each surface that was passed in into surfaceIds
        if (surfaceIds != nullptr) {
            surfaceIds->push_back(id);
        }
    }

    newStream->setStatusTracker(mStatusTracker);

    newStream->setBufferManager(mBufferManager);

    newStream->setImageDumpMask(mImageDumpMask);

    res = mOutputStreams.add(mNextStreamId, newStream);
    if (res < 0) {
        SET_ERR_L("Can't add new stream to set: %s (%d)", strerror(-res), res);
        return res;
    }

    mSessionStatsBuilder.addStream(mNextStreamId);

    *id = mNextStreamId++;
    mNeedConfig = true;

    // Continue captures if active at start
    if (wasActive) {
        ALOGV("%s: Restarting activity to reconfigure streams", __FUNCTION__);
        // Reuse current operating mode and session parameters for new stream config
        res = configureStreamsLocked(mOperatingMode, mSessionParams);
        if (res != OK) {
            CLOGE("Can't reconfigure device for new stream %d: %s (%d)",
                    mNextStreamId, strerror(-res), res);
            return res;
        }
        internalResumeLocked();
    }
    ALOGV("Camera %s: Created new stream", mId.string());
    return OK;
}

frameworks/av/services/camera/libcameraservice/device3/Camera3OutputStream.cpp

Camera3OutputStream::Camera3OutputStream(int id,
        sp<Surface> consumer,
        uint32_t width, uint32_t height, int format,
        android_dataspace dataSpace, camera_stream_rotation_t rotation,
        nsecs_t timestampOffset, const String8& physicalCameraId,
        const std::unordered_set<int32_t> &sensorPixelModesUsed,
        int setId, bool isMultiResolution) :
        Camera3IOStreamBase(id, CAMERA_STREAM_OUTPUT, width, height,
                            /*maxSize*/0, format, dataSpace, rotation,
                            physicalCameraId, sensorPixelModesUsed, setId, isMultiResolution),
        mConsumer(consumer),
        mTransform(0),
        mTraceFirstBuffer(true),
        mUseBufferManager(false),
        mTimestampOffset(timestampOffset),
        mConsumerUsage(0),
        mDropBuffers(false),
        mDequeueBufferLatency(kDequeueLatencyBinSize) {

    if (mConsumer == NULL) {
        ALOGE("%s: Consumer is NULL!", __FUNCTION__);
        mState = STATE_ERROR;
    }

    bool needsReleaseNotify = setId > CAMERA3_STREAM_SET_ID_INVALID;
    mBufferProducerListener = new BufferProducerListener(this, needsReleaseNotify);
}

Sequence diagram of the session-creation flow (figure omitted).

4. Starting Preview

1. CameraManager.openCamera() yields a CameraDeviceImpl; calling its createCaptureSession creates the session and yields a CameraCaptureSessionImpl object; calling CameraCaptureSessionImpl's setRepeatingRequest then starts the preview.
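A minimal app-side sketch of that last step, assuming the CameraDevice, the session, and the preview Surface used when creating the session are available (names are illustrative):

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;

public final class PreviewStarter {
    /** Builds a TEMPLATE_PREVIEW request targeting the preview surface and submits it as repeating. */
    public static int startPreview(CameraDevice device, CameraCaptureSession session,
            Surface previewSurface, Handler handler) throws CameraAccessException {
        CaptureRequest.Builder builder =
                device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        builder.addTarget(previewSurface);   // converted to streamId/surfaceId before reaching native
        builder.set(CaptureRequest.CONTROL_MODE, CaptureRequest.CONTROL_MODE_AUTO);

        // The repeating request is resubmitted by CameraService until stopRepeating() is called.
        return session.setRepeatingRequest(builder.build(), /*callback*/ null, handler);
    }
}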

frameworks/base/core/java/android/hardware/camera2/impl/CameraCaptureSessionImpl.java

  public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
            Handler handler) throws CameraAccessException {
        checkRepeatingRequest(request);

        synchronized (mDeviceImpl.mInterfaceLock) {
            checkNotClosed();

            handler = checkHandler(handler, callback);

            if (DEBUG) {
                Log.v(TAG, mIdString + "setRepeatingRequest - request " + request + ", callback " +
                        callback + " handler" + " " + handler);
            }

            return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,
                    createCaptureCallbackProxy(handler, callback), mDeviceExecutor));
        }
    }

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

    public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
            Executor executor) throws CameraAccessException {
        List<CaptureRequest> requestList = new ArrayList<CaptureRequest>();
        requestList.add(request);
        return submitCaptureRequest(requestList, callback, executor, /*streaming*/true);
    }
 private int submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,
            Executor executor, boolean repeating) throws CameraAccessException {

            ......
            // if a repeating request was already submitted, stop it first, then submit again
            if (repeating) {
                stopRepeating();
            }

            SubmitInfo requestInfo;

            CaptureRequest[] requestArray = requestList.toArray(new CaptureRequest[requestList.size()]);
            // Convert Surface to streamIdx and surfaceIdx
            for (CaptureRequest request : requestArray) {
                request.convertSurfaceToStreamId(mConfiguredOutputs);
            }
            
            // calls into CameraDeviceClient::submitRequestList
            requestInfo = mRemoteDevice.submitRequestList(requestArray, repeating);
            if (DEBUG) {
                Log.v(TAG, "last frame number " + requestInfo.getLastFrameNumber());
            }

             ......

            if (repeating) {
                if (mRepeatingRequestId != REQUEST_ID_NONE) {
                    checkEarlyTriggerSequenceCompleteLocked(mRepeatingRequestId,
                            requestInfo.getLastFrameNumber(),
                            mRepeatingRequestTypes);
                }
                mRepeatingRequestId = requestInfo.getRequestId();
                mRepeatingRequestTypes = getRequestTypes(requestArray);
            } else {
                mRequestLastFrameNumbersList.add(
                    new RequestLastFrameNumbersHolder(requestList, requestInfo));
            }

            if (mIdle) {
                mDeviceExecutor.execute(mCallOnActive);
            }
            mIdle = false;

            return requestInfo.getRequestId();
        }
    }

frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp

binder::Status CameraDeviceClient::submitRequestList(
        const std::vector<hardware::camera2::CaptureRequest>& requests,
        bool streaming,
        /*out*/
        hardware::camera2::utils::SubmitInfo *submitInfo) {
      
    ......
    List<const CameraDeviceBase::PhysicalCameraSettingsList> metadataRequestList;
    std::list<const SurfaceMap> surfaceMapList;
    submitInfo->mRequestId = mRequestIdCounter;
    uint32_t loopCounter = 0;

    // 
    for (auto&& request: requests) {
        if (request.mIsReprocess) {
            if (!mInputStream.configured) {
                ALOGE("%s: Camera %s: no input stream is configured.", __FUNCTION__,
                        mCameraIdStr.string());
                return STATUS_ERROR_FMT(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "No input configured for camera %s but request is for reprocessing",
                        mCameraIdStr.string());
            } else if (streaming) {
                ALOGE("%s: Camera %s: streaming reprocess requests not supported.", __FUNCTION__,
                        mCameraIdStr.string());
                return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "Repeating reprocess requests not supported");
            } else if (request.mPhysicalCameraSettings.size() > 1) {
                ALOGE("%s: Camera %s: reprocess requests not supported for "
                        "multiple physical cameras.", __FUNCTION__,
                        mCameraIdStr.string());
                return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "Reprocess requests not supported for multiple cameras");
            }
        }

        if (request.mPhysicalCameraSettings.empty()) {
            ALOGE("%s: Camera %s: request doesn't contain any settings.", __FUNCTION__,
                    mCameraIdStr.string());
            return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                    "Request doesn't contain any settings");
        }

        //The first capture settings should always match the logical camera id
        String8 logicalId(request.mPhysicalCameraSettings.begin()->id.c_str());
        if (mDevice->getId() != logicalId) {
            ALOGE("%s: Camera %s: Invalid camera request settings.", __FUNCTION__,
                    mCameraIdStr.string());
            return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                    "Invalid camera request settings");
        }

        if (request.mSurfaceList.isEmpty() && request.mStreamIdxList.size() == 0) {
            ALOGE("%s: Camera %s: Requests must have at least one surface target. "
                    "Rejecting request.", __FUNCTION__, mCameraIdStr.string());
            return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                    "Request has no output targets");
        }

        /**
         * Write in the output stream IDs and map from stream ID to surface ID
         * which we calculate from the capture request's list of surface target
         */
        SurfaceMap surfaceMap;
        Vector<int32_t> outputStreamIds;
        std::vector<std::string> requestedPhysicalIds;
        if (request.mSurfaceList.size() > 0) {
            for (const sp<Surface>& surface : request.mSurfaceList) {
                if (surface == 0) continue;

                int32_t streamId;
                sp<IGraphicBufferProducer> gbp = surface->getIGraphicBufferProducer();
                res = insertGbpLocked(gbp, &surfaceMap, &outputStreamIds, &streamId);
                if (!res.isOk()) {
                    return res;
                }

                ssize_t index = mConfiguredOutputs.indexOfKey(streamId);
                if (index >= 0) {
                    String8 requestedPhysicalId(
                            mConfiguredOutputs.valueAt(index).getPhysicalCameraId());
                    requestedPhysicalIds.push_back(requestedPhysicalId.string());
                } else {
                    ALOGW("%s: Output stream Id not found among configured outputs!", __FUNCTION__);
                }
            }
        } else {
            for (size_t i = 0; i < request.mStreamIdxList.size(); i++) {
                int streamId = request.mStreamIdxList.itemAt(i);
                int surfaceIdx = request.mSurfaceIdxList.itemAt(i);

                ssize_t index = mConfiguredOutputs.indexOfKey(streamId);
                if (index < 0) {
                    ALOGE("%s: Camera %s: Tried to submit a request with a surface that"
                            " we have not called createStream on: stream %d",
                            __FUNCTION__, mCameraIdStr.string(), streamId);
                    return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                            "Request targets Surface that is not part of current capture session");
                }

                const auto& gbps = mConfiguredOutputs.valueAt(index).getGraphicBufferProducers();
                if ((size_t)surfaceIdx >= gbps.size()) {
                    ALOGE("%s: Camera %s: Tried to submit a request with a surface that"
                            " we have not called createStream on: stream %d, surfaceIdx %d",
                            __FUNCTION__, mCameraIdStr.string(), streamId, surfaceIdx);
                    return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                            "Request targets Surface has invalid surface index");
                }

                res = insertGbpLocked(gbps[surfaceIdx], &surfaceMap, &outputStreamIds, nullptr);
                if (!res.isOk()) {
                    return res;
                }

                String8 requestedPhysicalId(
                        mConfiguredOutputs.valueAt(index).getPhysicalCameraId());
                requestedPhysicalIds.push_back(requestedPhysicalId.string());
            }
        }

        CameraDeviceBase::PhysicalCameraSettingsList physicalSettingsList;
        for (const auto& it : request.mPhysicalCameraSettings) {
            if (it.settings.isEmpty()) {
                ALOGE("%s: Camera %s: Sent empty metadata packet. Rejecting request.",
                        __FUNCTION__, mCameraIdStr.string());
                return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "Request settings are empty");
            }

            // Check whether the physical / logical stream has settings
            // consistent with the sensor pixel mode(s) it was configured with.
            // mCameraIdToStreamSet will only have ids that are high resolution
            const auto streamIdSetIt = mHighResolutionCameraIdToStreamIdSet.find(it.id);
            if (streamIdSetIt != mHighResolutionCameraIdToStreamIdSet.end()) {
                std::list<int> streamIdsUsedInRequest = getIntersection(streamIdSetIt->second,
                        outputStreamIds);
                if (!request.mIsReprocess &&
                        !isSensorPixelModeConsistent(streamIdsUsedInRequest, it.settings)) {
                     ALOGE("%s: Camera %s: Request settings CONTROL_SENSOR_PIXEL_MODE not "
                            "consistent with configured streams. Rejecting request.",
                            __FUNCTION__, it.id.c_str());
                    return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "Request settings CONTROL_SENSOR_PIXEL_MODE are not consistent with "
                        "streams configured");
                }
            }

            String8 physicalId(it.id.c_str());
            bool hasTestPatternModePhysicalKey = std::find(mSupportedPhysicalRequestKeys.begin(),
                    mSupportedPhysicalRequestKeys.end(), ANDROID_SENSOR_TEST_PATTERN_MODE) !=
                    mSupportedPhysicalRequestKeys.end();
            bool hasTestPatternDataPhysicalKey = std::find(mSupportedPhysicalRequestKeys.begin(),
                    mSupportedPhysicalRequestKeys.end(), ANDROID_SENSOR_TEST_PATTERN_DATA) !=
                    mSupportedPhysicalRequestKeys.end();
            if (physicalId != mDevice->getId()) {
                auto found = std::find(requestedPhysicalIds.begin(), requestedPhysicalIds.end(),
                        it.id);
                if (found == requestedPhysicalIds.end()) {
                    ALOGE("%s: Camera %s: Physical camera id: %s not part of attached outputs.",
                            __FUNCTION__, mCameraIdStr.string(), physicalId.string());
                    return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                            "Invalid physical camera id");
                }

                if (!mSupportedPhysicalRequestKeys.empty()) {
                    // Filter out any unsupported physical request keys.
                    CameraMetadata filteredParams(mSupportedPhysicalRequestKeys.size());
                    camera_metadata_t *meta = const_cast<camera_metadata_t *>(
                            filteredParams.getAndLock());
                    set_camera_metadata_vendor_id(meta, mDevice->getVendorTagId());
                    filteredParams.unlock(meta);

                    for (const auto& keyIt : mSupportedPhysicalRequestKeys) {
                        camera_metadata_ro_entry entry = it.settings.find(keyIt);
                        if (entry.count > 0) {
                            filteredParams.update(entry);
                        }
                    }

                    physicalSettingsList.push_back({it.id, filteredParams,
                            hasTestPatternModePhysicalKey, hasTestPatternDataPhysicalKey});
                }
            } else {
                physicalSettingsList.push_back({it.id, it.settings});
            }
        }

        if (!enforceRequestPermissions(physicalSettingsList.begin()->metadata)) {
            // Callee logs
            return STATUS_ERROR(CameraService::ERROR_PERMISSION_DENIED,
                    "Caller does not have permission to change restricted controls");
        }

        physicalSettingsList.begin()->metadata.update(ANDROID_REQUEST_OUTPUT_STREAMS,
                &outputStreamIds[0], outputStreamIds.size());

        if (request.mIsReprocess) {
            physicalSettingsList.begin()->metadata.update(ANDROID_REQUEST_INPUT_STREAMS,
                    &mInputStream.id, 1);
        }

        physicalSettingsList.begin()->metadata.update(ANDROID_REQUEST_ID,
                &(submitInfo->mRequestId), /*size*/1);
        loopCounter++; // loopCounter starts from 1
        ALOGV("%s: Camera %s: Creating request with ID %d (%d of %zu)",
                __FUNCTION__, mCameraIdStr.string(), submitInfo->mRequestId,
                loopCounter, requests.size());

        metadataRequestList.push_back(physicalSettingsList);
        surfaceMapList.push_back(surfaceMap);
    }
    mRequestIdCounter++;

    if (streaming) {
        // preview takes this path
        err = mDevice->setStreamingRequestList(metadataRequestList, surfaceMapList,
                &(submitInfo->mLastFrameNumber));
    } else {
        // still capture takes this path
        err = mDevice->captureList(metadataRequestList, surfaceMapList,
                &(submitInfo->mLastFrameNumber));
    }

    ALOGV("%s: Camera %s: End of function", __FUNCTION__, mCameraIdStr.string());
    return res;
}

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::setStreamingRequestList(
        const List<const PhysicalCameraSettingsList> &requestsList,
        const std::list<const SurfaceMap> &surfaceMaps, int64_t *lastFrameNumber) {
    ATRACE_CALL();
    
    return submitRequestsHelper(requestsList, surfaceMaps, /*repeating*/true, lastFrameNumber);
}


status_t Camera3Device::submitRequestsHelper(
        const List<const PhysicalCameraSettingsList> &requests,
        const std::list<const SurfaceMap> &surfaceMaps,
        bool repeating,
        /*out*/
        int64_t *lastFrameNumber) {
    // preview takes this path
    if (repeating) {
        res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    } else {
        res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
    }

    ......
    return res;
}


status_t Camera3Device::RequestThread::setRepeatingRequests(
        const RequestList &requests,
        /*out*/
        int64_t *lastFrameNumber) {
    ATRACE_CALL();
    Mutex::Autolock l(mRequestLock);
    if (lastFrameNumber != NULL) {
        *lastFrameNumber = mRepeatingLastFrameNumber;
    }
    mRepeatingRequests.clear();
    mFirstRepeating = true;
    // hand all the requests over to mRepeatingRequests
    mRepeatingRequests.insert(mRepeatingRequests.begin(),
            requests.begin(), requests.end());

    unpauseForNewRequests();

    mRepeatingLastFrameNumber = hardware::camera2::ICameraDeviceUser::NO_IN_FLIGHT_REPEATING_FRAMES;
    return OK;
}

void Camera3Device::RequestThread::unpauseForNewRequests() {
    ATRACE_CALL();
    // With work to do, mark thread as unpaused.
    // If paused by request (setPaused), don't resume, to avoid
    // extra signaling/waiting overhead to waitUntilPaused
    // the key step: signal() wakes up the thread that sends requests to the HAL
    mRequestSignal.signal();
    Mutex::Autolock p(mPauseLock);
    if (!mDoPause) {
        ALOGV("%s: RequestThread: Going active", __FUNCTION__);
        if (mPaused) {
            sp<StatusTracker> statusTracker = mStatusTracker.promote();
            if (statusTracker != 0) {
                statusTracker->markComponentActive(mStatusId);
            }
        }
        mPaused = false;
    }
}

mRequestSignal.signal() wakes up the RequestThread that had been blocked waiting for work. Its threadLoop then starts running: the waiting thread is interrupted, starts acquiring buffers, and sends the requests down to the camera HAL through processBatchCaptureRequests. When a result comes back, onResultAvailable notifies the app that the camera data is ready.
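On the app side, these notifications arrive through the CaptureCallback passed to setRepeatingRequest; a small sketch of the receiving end (logging only, the class name is illustrative):

import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.TotalCaptureResult;
import android.util.Log;

/** Receives per-frame result metadata once onResultAvailable has reached the framework. */
public class ResultLoggingCallback extends CameraCaptureSession.CaptureCallback {
    private static final String TAG = "ResultLoggingCallback";

    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
            CaptureRequest request, TotalCaptureResult result) {
        Long timestamp = result.get(CaptureResult.SENSOR_TIMESTAMP);
        Log.d(TAG, "frame " + result.getFrameNumber() + " completed, sensor timestamp=" + timestamp);
    }
}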

5. Stopping Preview

To stop the preview, the app calls CameraDeviceImpl's stopRepeating() interface. A trace shows the following call chain:
CameraDeviceImpl::stopRepeating -> CameraDeviceClient::cancelRequest -> Camera3Device::clearStreamingRequest -> Camera3Device::RequestThread::clearRepeatingRequests -> Camera3Device::RequestThread::clearRepeatingRequestsLocked
clearRepeatingRequestsLocked clears the mRepeatingRequests list. Once mRepeatingRequests is empty, Camera3Device's threadLoop stops acquiring buffers and stops sending requests to the camera HAL.
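On the app side, this whole sequence is triggered by a single call on the session; a minimal sketch:

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;

public final class PreviewStopper {
    /** Stops the repeating preview request; frames already in flight still complete. */
    public static void stopPreview(CameraCaptureSession session) throws CameraAccessException {
        session.stopRepeating();
        // session.abortCaptures() could additionally be called to discard in-flight
        // requests, at the cost of a slower restart.
    }
}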

6. Closing the Camera

The app calls the close() interface to shut down the camera. In cameraserver the related threadLoop threads exit, and the camera HAL performs the actual camera ioctl close and destroys the camera context associated with the AIS client.
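A minimal app-side teardown sketch; closing the session first and then the device is the usual order (names are illustrative):

import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;

public final class CameraCloser {
    /** Releases the session and the device; after this the HAL camera context is destroyed. */
    public static void closeCamera(CameraCaptureSession session, CameraDevice device) {
        if (session != null) {
            session.close();
        }
        if (device != null) {
            device.close();   // CameraDeviceImpl.close() -> disconnect() in CameraService
        }
    }
}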
