MediaPipe Pipeline Analysis, Part 2


Object Detection Graph

[Figure: object detection graph topology]

Part 1: Upstream Input Processing in the Pipeline

1 TfLiteConverterCalculator

TfLiteConverterCalculator converts the input data into the tensor types supported by the TensorFlow APIs (Tensor / TfLiteTensor) and initializes the related input and output nodes. The calculator's work is driven by its interpreter member, std::unique_ptr<tflite::Interpreter> interpreter_ = nullptr;, which carries out inference of the data on the CPU or GPU.

1.1 TfLiteTensor / Tensor

TensorFlow

In TensorFlow Lite, TfLiteTensor and Tensor are different concepts.
Tensor is the basic data structure in TensorFlow for representing multi-dimensional arrays. In TensorFlow Lite, tensors carry the input and output data and represent variables and weights inside the model.
TfLiteTensor is a data structure specific to TensorFlow Lite. It wraps a tensor buffer and adds extra properties and methods needed by TensorFlow Lite, such as quantization parameters and dimension information used to adjust input/output shapes.
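As a quick illustration, the following is a hedged sketch (assuming an interpreter_ member such as std::unique_ptr<tflite::Interpreter>, as in TfLiteConverterCalculator) that reads a TfLiteTensor's type, dimensions, and quantization parameters; the field names follow the struct definition quoted right after.

// Hedged sketch: inspecting a TfLiteTensor obtained from an interpreter.
const int input_index = interpreter_->inputs()[0];
const TfLiteTensor* t = interpreter_->tensor(input_index);
TfLiteType type = t->type;                  // element data type, e.g. kTfLiteUInt8
int rank = t->dims->size;                   // number of dimensions
int dim0 = t->dims->data[0];                // size of the first dimension
float scale = t->params.scale;              // quantization scale
int32_t zero_point = t->params.zero_point;  // quantization zero point

The full TfLiteTensor definition from the TensorFlow Lite C API is reproduced below.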

// A tensor in the interpreter system which is a wrapper around a buffer of
// data including a dimensionality (or NULL if not currently defined).
#ifndef TF_LITE_STATIC_MEMORY
typedef struct TfLiteTensor {
  // The data type specification for data stored in `data`. This affects
  // what member of `data` union should be used.
  TfLiteType type;
  // A union of data pointers. The appropriate type should be used for a typed
  // tensor based on `type`.
  TfLitePtrUnion data;
  // A pointer to a structure representing the dimensionality interpretation
  // that the buffer should have. NOTE: the product of elements of `dims`
  // and the element datatype size should be equal to `bytes` below.
  TfLiteIntArray* dims;
  // Quantization information.
  TfLiteQuantizationParams params;
  // How memory is mapped
  //  kTfLiteMmapRo: Memory mapped read only.
  //  i.e. weights
  //  kTfLiteArenaRw: Arena allocated read write memory
  //  (i.e. temporaries, outputs).
  TfLiteAllocationType allocation_type;
  // The number of bytes required to store the data of this Tensor. I.e.
  // (bytes of each element) * dims[0] * ... * dims[n-1].  For example, if
  // type is kTfLiteFloat32 and dims = {3, 2} then
  // bytes = sizeof(float) * 3 * 2 = 4 * 3 * 2 = 24.
  size_t bytes;

  // An opaque pointer to a tflite::MMapAllocation
  const void* allocation;

  // Null-terminated name of this tensor.
  const char* name;

  // The delegate which knows how to handle `buffer_handle`.
  // WARNING: This is an experimental interface that is subject to change.
  struct TfLiteDelegate* delegate;

  // An integer buffer handle that can be handled by `delegate`.
  // The value is valid only when delegate is not null.
  // WARNING: This is an experimental interface that is subject to change.
  TfLiteBufferHandle buffer_handle;

  // If the delegate uses its own buffer (e.g. GPU memory), the delegate is
  // responsible to set data_is_stale to true.
  // `delegate->CopyFromBufferHandle` can be called to copy the data from
  // delegate buffer.
  // WARNING: This is an // experimental interface that is subject to change.
  bool data_is_stale;

  // True if the tensor is a variable.
  bool is_variable;

  // Quantization information. Replaces params field above.
  TfLiteQuantization quantization;

  // Parameters used to encode a sparse tensor.
  // This is optional. The field is NULL if a tensor is dense.
  // WARNING: This is an experimental interface that is subject to change.
  TfLiteSparsity* sparsity;

  // Optional. Encodes shapes with unknown dimensions with -1. This field is
  // only populated when unknown dimensions exist in a read-write tensor (i.e.
  // an input or output tensor). (e.g.  `dims` contains [1, 1, 1, 3] and
  // `dims_signature` contains [1, -1, -1, 3]). Note that this field only
  // exists when TF_LITE_STATIC_MEMORY is not defined.
  const TfLiteIntArray* dims_signature;
} TfLiteTensor;

In TensorFlow Lite, a TfLiteTensor object is obtained via interpreter_->tensor(index), where index is the index of the input or output tensor.

For example, the following code uses the TensorFlow Lite C++ API to obtain the first input tensor:

const int tensor_index = interpreter_->inputs()[0];  
TfLiteTensor* tensor = interpreter_->tensor(tensor_index);

The shape of an input tensor can be changed with interpreter_->ResizeInputTensor(index, shape), where index is the index of the input tensor and shape is the new shape. For example, the following code uses the TensorFlow Lite C++ API to resize an input tensor:

const int tensor_index = interpreter_->inputs()[0];  
interpreter_->ResizeInputTensor(tensor_index, {height, width, channels});
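Putting these calls together, the following is a minimal sketch of driving an interpreter end to end; it assumes a float32 model whose tflite::Interpreter has already been built elsewhere, and reduces error handling to early returns.

#include <algorithm>
#include <vector>

#include "tensorflow/lite/interpreter.h"

// Hedged sketch: resize the input, allocate tensors, copy the data in, run
// inference, and read back the first output.
bool RunOnce(tflite::Interpreter* interpreter,
             const std::vector<float>& input_values,
             int height, int width, int channels) {
  const int input_index = interpreter->inputs()[0];
  if (interpreter->ResizeInputTensor(input_index,
                                     {1, height, width, channels}) != kTfLiteOk)
    return false;
  // AllocateTensors() must be called after any resize, before filling data.
  if (interpreter->AllocateTensors() != kTfLiteOk) return false;

  float* input = interpreter->typed_input_tensor<float>(0);
  std::copy(input_values.begin(), input_values.end(), input);

  if (interpreter->Invoke() != kTfLiteOk) return false;

  const float* output = interpreter->typed_output_tensor<float>(0);
  (void)output;  // consume the results here
  return true;
}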

1.2 Tensor

Tensor is the basic data structure in TensorFlow for representing multi-dimensional arrays. In TensorFlow Lite, tensors are used for input and output data and to represent variables and weights inside a model.

In TensorFlow, tensors can be created and manipulated in many ways, for example from Python lists or NumPy arrays, or through the operations TensorFlow provides. A tensor's shape can have any number of dimensions (one, two, three, and so on), and tensors are used in all kinds of deep-learning tasks such as image recognition, speech recognition, and natural language processing. TensorFlow also supports a wide range of hardware and operating systems and runs on CPUs, GPUs, TPUs, and other platforms.

A Tensor's data structure covers the following aspects (a short usage sketch follows the list):

  1. Shape: the shape defines the tensor's size and dimensionality. For example, a three-dimensional tensor of shape (10, 20, 30) holds 10 × 20 × 30 elements, i.e. 10 blocks of 20 rows with 30 elements each.
  2. Data type: the data type defines what kind of values the tensor stores, e.g. float32 or int32.
  3. Values: the tensor's values are stored in contiguous memory and can be read and modified by index.
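The following is a minimal sketch of constructing and reading a small float tensor with the C++ API; the constructor and accessors it uses are declared in the header reproduced below.

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/framework/tensor_shape.h"

// Hedged sketch: build a 3x2 float Tensor and touch its shape, dtype, and
// values through the Eigen accessors declared in tensor.h.
void TensorBasics() {
  tensorflow::Tensor t(tensorflow::DT_FLOAT, tensorflow::TensorShape({3, 2}));
  auto mat = t.matrix<float>();      // 2D Eigen view over the tensor's buffer
  for (int r = 0; r < 3; ++r)
    for (int c = 0; c < 2; ++c)
      mat(r, c) = r * 2.0f + c;      // values live in one contiguous buffer
  // t.dims() == 2, t.NumElements() == 6, t.TotalBytes() == 24 for DT_FLOAT
  auto flat = t.flat<float>();       // the same 6 values viewed as a 1D array
  (void)flat;
}

For reference, the declaration of tensorflow::Tensor in tensorflow/core/framework/tensor.h: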
/* Copyright 2015 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/

#ifndef TENSORFLOW_CORE_FRAMEWORK_TENSOR_H_
#define TENSORFLOW_CORE_FRAMEWORK_TENSOR_H_

#include <cstdint>
#include <iosfwd>
#include <string>
#include <type_traits>
#include <utility>

#include "unsupported/Eigen/CXX11/Tensor"  // from @eigen_archive
#include "tensorflow/core/framework/allocator.h"
#include "tensorflow/core/framework/tensor_shape.h"
#include "tensorflow/core/framework/tensor_types.h"
#include "tensorflow/core/framework/types.h"
#include "tensorflow/core/framework/types.pb.h"
#include "tensorflow/core/lib/core/refcount.h"
#include "tensorflow/core/lib/core/status.h"
#include "tensorflow/core/lib/core/stringpiece.h"
#include "tensorflow/core/lib/gtl/inlined_vector.h"
#include "tensorflow/core/platform/mem.h"
#include "tensorflow/core/platform/types.h"

namespace tensorflow {

// Forward declarations.  In particular, we forward declare protos so that their
// symbols can be removed from .so exports.
class AllocationDescription;
class OpKernelContext;
class Tensor;
class TensorBuffer;
class TensorCApi;
class TensorInterface;
class TensorCord;
class TensorDescription;
class TensorProto;
class Var;

namespace batch_util {
Status CopyElementToSlice(Tensor element, Tensor* parent, int64_t index);
Status CopySliceToElement(const Tensor& parent, Tensor* element, int64_t index);
Status MaybeMoveSliceToElement(Tensor* parent, Tensor* element, int64_t index);
Status CopyContiguousSlices(const Tensor& src, int64_t src_offset,
                            int64_t dst_offset, int64_t num_slices,
                            Tensor* dst);
}  // namespace batch_util

/// @ingroup core

/// Interface to access the raw ref-counted data buffer.
class TensorBuffer : public core::RefCounted {
 public:
  explicit TensorBuffer(void* data_ptr) : data_(data_ptr) {}
  ~TensorBuffer() override {}

  /// \brief data() points to a memory region of size() bytes.
  ///
  /// NOTE(mrry): The `data()` method is not virtual for performance reasons.
  /// It can be called multiple times when the contents of a `Tensor` are
  /// accessed, and so making it non-virtual allows the body to be inlined.
  void* data() const { return data_; }

  /// \brief Size (in bytes) of the buffer.
  virtual size_t size() const = 0;

  /// \brief If this TensorBuffer is sub-buffer of another TensorBuffer,
  /// returns that TensorBuffer. Otherwise, returns this.
  virtual TensorBuffer* root_buffer() = 0;

  /// \brief Fills metadata about the allocation into the proto.
  virtual void FillAllocationDescription(
      AllocationDescription* proto) const = 0;

  virtual bool GetAllocatedBytes(size_t* out_bytes) const;

  /// \brief Helper method to reinterpret the buffer as an array of `T`.
  template <typename T>
  T* base() const {
    return reinterpret_cast<T*>(data());
  }

  /// \brief Whether this TensorBuffer owns the underlying memory.
  virtual bool OwnsMemory() const { return true; }

  /// \brief The type of the underlying memory.
  virtual AllocatorMemoryType GetMemoryType() const {
    return AllocatorMemoryType::kUnknown;
  }

 private:
  void* const data_;
};

/// Represents an n-dimensional array of values.
class Tensor {
 public:
  /// \brief Creates a 1-dimensional, 0-element float tensor.
  ///
  /// The returned Tensor is not a scalar (shape {}), but is instead
  /// an empty one-dimensional Tensor (shape {0}, NumElements() ==
  /// 0). Since it has no elements, it does not need to be assigned a
  /// value and is initialized by default (IsInitialized() is
  /// true). If this is undesirable, consider creating a one-element
  /// scalar which does require initialization:
  ///
  /// ```c++
  ///
  ///     Tensor(DT_FLOAT, TensorShape({}))
  ///
  /// ```
  Tensor();

  /// \brief Creates a Tensor of the given `type` and `shape`.  If
  /// LogMemory::IsEnabled() the allocation is logged as coming from
  /// an unknown kernel and step. Calling the Tensor constructor
  /// directly from within an Op is deprecated: use the
  /// OpKernelConstruction/OpKernelContext allocate_* methods to
  /// allocate a new tensor, which record the kernel and step.
  ///
  /// The underlying buffer is allocated using a `CPUAllocator`.
  Tensor(DataType type, const TensorShape& shape);

  /// \brief Creates a tensor with the input `type` and `shape`, using
  /// the allocator `a` to allocate the underlying buffer. If
  /// LogMemory::IsEnabled() the allocation is logged as coming from
  /// an unknown kernel and step. Calling the Tensor constructor
  /// directly from within an Op is deprecated: use the
  /// OpKernelConstruction/OpKernelContext allocate_* methods to
  /// allocate a new tensor, which record the kernel and step.
  ///
  /// `a` must outlive the lifetime of this Tensor.
  Tensor(Allocator* a, DataType type, const TensorShape& shape);

  /// \brief Creates a tensor with the input `type` and `shape`, using
  /// the allocator `a` and the specified "allocation_attr" to
  /// allocate the underlying buffer. If the kernel and step are known
  /// allocation_attr.allocation_will_be_logged should be set to true
  /// and LogMemory::RecordTensorAllocation should be called after the
  /// tensor is constructed. Calling the Tensor constructor directly
  /// from within an Op is deprecated: use the
  /// OpKernelConstruction/OpKernelContext allocate_* methods to
  /// allocate a new tensor, which record the kernel and step.
  ///
  /// `a` must outlive the lifetime of this Tensor.
  Tensor(Allocator* a, DataType type, const TensorShape& shape,
         const AllocationAttributes& allocation_attr);

  /// \brief Creates a tensor with the input datatype, shape and buf.
  ///
  /// Acquires a ref on buf that belongs to this Tensor.
  Tensor(DataType type, const TensorShape& shape, TensorBuffer* buf);

  /// \brief Creates a tensor with the input datatype, shape and buf.
  ///
  /// Takes an ownership of the bufffer from the reference counted pointer.
  Tensor(DataType type, TensorShape shape, core::RefCountPtr<TensorBuffer> buf);

  /// \brief Creates an empty Tensor of the given data type.
  ///
  /// Like Tensor(), returns a 1-dimensional, 0-element Tensor with
  /// IsInitialized() returning True. See the Tensor() documentation
  /// for details.
  explicit Tensor(DataType type);

  /// \brief Initializes a tensor with the input `type` and `shape`, or returns
  /// an error and leaves `out_tensor` unmodified. This factory method should be
  /// used instead of the corresponding constructor if calling code cannot
  /// validate that the `DataType` is valid and supported.
  ///
  /// The underlying buffer is allocated using a `CPUAllocator`.
  static Status BuildTensor(DataType type, const TensorShape& shape,
                            Tensor* out_tensor);

 private:
  // A tag type for selecting the `Tensor` constructor overload that creates a
  // scalar tensor in host memory.
  struct host_scalar_tag {};

  class HostScalarTensorBufferBase;
  template <typename T>
  struct ValueAndTensorBuffer;

  // Creates a tensor with the given scalar `value` in CPU memory.
  template <typename T>
  Tensor(T value, host_scalar_tag tag);

 public:
  // A series of specialized constructors for scalar tensors in host memory.
  //
  // NOTE: The `Variant` host-scalar constructor is not defined, because Variant
  // is implicitly constructible from many different types, and this causes
  // ambiguities with some compilers.
  explicit Tensor(float scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(double scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(int32_t scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(uint32 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(uint16 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(uint8 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(int16_t scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(int8_t scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(tstring scalar_value)
      : Tensor(std::move(scalar_value), host_scalar_tag{}) {}
  explicit Tensor(complex64 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(complex128 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(int64_t scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(uint64 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(bool scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(qint8 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(quint8 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(qint16 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(quint16 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(qint32 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(bfloat16 scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(Eigen::half scalar_value)
      : Tensor(scalar_value, host_scalar_tag{}) {}
  explicit Tensor(ResourceHandle scalar_value)
      : Tensor(std::move(scalar_value), host_scalar_tag{}) {}

  // NOTE: The `const char*` host-scalar constructor is provided as a
  // convenience because otherwise passing a string literal would surprisingly
  // construct a DT_BOOL tensor.
  explicit Tensor(const char* scalar_value)
      : Tensor(tstring(scalar_value), host_scalar_tag{}) {}

  /// Copy constructor.
  Tensor(const Tensor& other);

  /// \brief Move constructor. After this call, <other> is safely destructible
  /// can be assigned to, and IsInitialized() can be called and will return
  /// false. Other calls on <other> (e.g. shape manipulation) are not valid.
  Tensor(Tensor&& other);

  // Explicitly delete constructor that take a pointer (except char*)
  // so that the pointer doesn't get implicitly cast to bool.
  template <typename T, typename std::enable_if<!std::is_same<T, char>::value,
                                                T>::type* = nullptr>
  explicit Tensor(T* t) = delete;

  ~Tensor();

  // I/O operators.
  friend std::ostream&  // NOLINT: iosfwd
  operator<<(std::ostream& out, const Tensor& tensor);

  /// Returns the data type.
  DataType dtype() const { return shape_.data_type(); }

  /// Returns the shape of the tensor.
  const TensorShape& shape() const { return shape_; }

  /// \brief Convenience accessor for the tensor shape.
  ///
  /// For all shape accessors, see comments for relevant methods of
  /// `TensorShape` in `tensor_shape.h`.
  int dims() const { return shape().dims(); }

  /// Convenience accessor for the tensor shape.
  int64_t dim_size(int d) const { return shape().dim_size(d); }

  /// Convenience accessor for the tensor shape.
  int64_t NumElements() const { return shape().num_elements(); }

  bool IsSameSize(const Tensor& b) const {
    return shape().IsSameSize(b.shape());
  }

  // True iff the two tensors use the same underlying refcounted storage
  bool SharesBufferWith(const Tensor& b) const;

  /// \brief If necessary, has this Tensor been initialized?
  ///
  /// Zero-element Tensors are always considered initialized, even if they
  /// have never been assigned to and do not have any memory allocated.
  bool IsInitialized() const;

  /// Returns the estimated memory usage of this tensor.
  size_t TotalBytes() const;

  // Returns the size of allocated memory for this tensor.
  size_t AllocatedBytes() const;

  /// Returns true iff this tensor is aligned.
  bool IsAligned() const {
#if EIGEN_MAX_ALIGN_BYTES == 0
    return true;
#else
    void* ptr = base<void>();
    return dtype() == DT_STRING || NumElements() == 0 ||
           (reinterpret_cast<intptr_t>(ptr) % EIGEN_MAX_ALIGN_BYTES == 0);
#endif
  }

  /// Assign operator. This tensor shares other's underlying storage.
  Tensor& operator=(const Tensor& other) {
    CopyFromInternal(other, other.shape());
    return *this;
  }

  /// Move operator.  See move constructor for details.
  Tensor& operator=(Tensor&& other);

  /// \brief Copy the other tensor into this tensor and reshape it.
  ///
  /// This tensor shares other's underlying storage. Returns `true`
  /// iff `other.shape()` has the same number of elements of the given
  /// `shape`.
  bool CopyFrom(const Tensor& other,
                const TensorShape& shape) TF_MUST_USE_RESULT {
    if (other.NumElements() != shape.num_elements()) return false;
    CopyFromInternal(other, shape);
    return true;
  }

  /// \brief Slice this tensor along the 1st dimension.

  /// I.e., the returned tensor satisfies
  ///     returned[i, ...] == this[dim0_start + i, ...].
  /// The returned tensor shares the underlying tensor buffer with this
  /// tensor.
  ///
  /// NOTE: The returned tensor may not satisfy the same alignment
  /// requirement as this tensor depending on the shape. The caller
  /// must check the returned tensor's alignment before calling certain
  /// methods that have alignment requirement (e.g., `flat()`, `tensor()`).
  ///
  /// NOTE: When fed with an N-dimensional tensor, this method returns a tensor
  /// also with N dimensions. If you want to select a sub tensor, see SubSlice.
  ///
  /// REQUIRES: `dims()` >= 1
  /// REQUIRES: `0 <= dim0_start <= dim0_limit <= dim_size(0)`
  Tensor Slice(int64_t dim0_start, int64_t dim0_limit) const;

  /// \brief Select a subslice from this tensor along the 1st dimension.
  ///
  /// When fed with an N-dimensional tensor, this method returns a tensor with
  /// N-1 dimensions, where the returned tensor is a subslice of the input
  /// tensor along the first dimension. The N-1 dimensions of the returned
  /// tensor are the last N-1 dimensions of the input tensor.
  ///
  /// NOTE: The returned tensor may not satisfy the same alignment
  /// requirement as this tensor depending on the shape. The caller
  /// must check the returned tensor's alignment before calling certain
  /// methods that have alignment requirement (e.g., `flat()`, `tensor()`).
  ///
  /// REQUIRES: `dims()` >= 1
  /// REQUIRES: `0 <= index < dim_size(0)`
  Tensor SubSlice(int64_t index) const;

  /// \brief Parse `other` and construct the tensor.

  /// Returns `true` iff the parsing succeeds. If the parsing fails,
  /// the state of `*this` is unchanged.
  bool FromProto(const TensorProto& other) TF_MUST_USE_RESULT;
  bool FromProto(Allocator* a, const TensorProto& other) TF_MUST_USE_RESULT;

  /// \brief Fills in `proto` with `*this` tensor's content.
  ///
  /// `AsProtoField()` fills in the repeated field for `proto.dtype()`, while
  /// `AsProtoTensorContent()` encodes the content in `proto.tensor_content()`
  /// in a compact form.
  void AsProtoField(TensorProto* proto) const;
  void AsProtoTensorContent(TensorProto* proto) const;

  /// \brief Return the tensor data as an `Eigen::Tensor` with the type and
  /// sizes of this `Tensor`.
  ///
  /// Use these methods when you know the data type and the number of
  /// dimensions of the Tensor and you want an `Eigen::Tensor`
  /// automatically sized to the `Tensor` sizes. The implementation check
  /// fails if either type or sizes mismatch.
  ///
  /// Example:
  ///
  /// ```c++
  ///
  ///     typedef float T;
  ///     Tensor my_mat(...built with Shape{rows: 3, cols: 5}...);
  ///     auto mat = my_mat.matrix<T>();    // 2D Eigen::Tensor, 3 x 5.
  ///     auto mat = my_mat.tensor<T, 2>(); // 2D Eigen::Tensor, 3 x 5.
  ///     auto vec = my_mat.vec<T>();       // CHECK fails as my_mat is 2D.
  ///     auto vec = my_mat.tensor<T, 3>(); // CHECK fails as my_mat is 2D.
  ///     auto mat = my_mat.matrix<int32>();// CHECK fails as type mismatch.
  ///
  /// ```
  template <typename T>
  typename TTypes<T>::Vec vec() {
    return tensor<T, 1>();
  }

  template <typename T>
  typename TTypes<T>::Matrix matrix() {
    return tensor<T, 2>();
  }

  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::Tensor tensor() TF_ATTRIBUTE_NOINLINE;

  /// \brief Return the tensor data to an `Eigen::Tensor` with the
  /// same size but a bitwise cast to the specified dtype `T`.
  ///
  /// Using a bitcast is useful for move and copy operations.
  /// NOTE: this is the same as `tensor()` except a bitcast is allowed.
  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::Tensor bit_casted_tensor();

  /// \brief Return the tensor data to an `Eigen::Tensor` with the
  /// last dimension elements converted into single elements of a larger type.
  ///
  /// For example, this is useful for kernels that can treat NCHW_VECT_C int8
  /// tensors as NCHW int32 tensors. The sizeof(T) should equal the size of
  /// the original element type * num elements in the original last dimension.
  /// NDIMS should be 1 less than the original number of dimensions.
  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::Tensor reinterpret_last_dimension();

  /// \brief Return the tensor data as an `Eigen::Tensor` of the data type and a
  /// specified shape.
  ///
  /// These methods allow you to access the data with the dimensions
  /// and sizes of your choice.  You do not need to know the number of
  /// dimensions of the Tensor to call them.  However, they `CHECK` that
  /// the type matches and the dimensions requested creates an
  /// `Eigen::Tensor` with the same number of elements as the tensor.
  ///
  /// Example:
  ///
  /// ```c++
  ///
  ///     typedef float T;
  ///     Tensor my_ten(...built with Shape{planes: 4, rows: 3, cols: 5}...);
  ///     // 1D Eigen::Tensor, size 60:
  ///     auto flat = my_ten.flat<T>();
  ///     // 2D Eigen::Tensor 12 x 5:
  ///     auto inner = my_ten.flat_inner_dims<T>();
  ///     // 2D Eigen::Tensor 4 x 15:
  ///     auto outer = my_ten.shaped<T, 2>({4, 15});
  ///     // CHECK fails, bad num elements:
  ///     auto outer = my_ten.shaped<T, 2>({4, 8});
  ///     // 3D Eigen::Tensor 6 x 5 x 2:
  ///     auto weird = my_ten.shaped<T, 3>({6, 5, 2});
  ///     // CHECK fails, type mismatch:
  ///     auto bad   = my_ten.flat<int32>();
  ///
  /// ```
  template <typename T>
  typename TTypes<T>::Flat flat();

  template <typename T>
  typename TTypes<T>::UnalignedFlat unaligned_flat() {
    return unaligned_shaped<T, 1>({NumElements()});
  }

  /// Returns the data as an Eigen::Tensor with NDIMS dimensions, collapsing all
  /// Tensor dimensions but the last NDIMS-1 into the first dimension of the
  /// result. If NDIMS > dims() then leading dimensions of size 1 will be
  /// added to make the output rank NDIMS.
  template <typename T, size_t NDIMS = 2>
  typename TTypes<T, NDIMS>::Tensor flat_inner_dims();

  /// Returns the data as an Eigen::Tensor with NDIMS dimensions, collapsing all
  /// Tensor dimensions but the first NDIMS-1 into the last dimension of the
  /// result. If NDIMS > dims() then trailing dimensions of size 1 will be
  /// added to make the output rank NDIMS.
  template <typename T, size_t NDIMS = 2>
  typename TTypes<T, NDIMS>::Tensor flat_outer_dims();

  /// Returns the data as an Eigen::Tensor with NDIMS dimensions, collapsing the
  /// first 'begin' Tensor dimensions into the first dimension of the result and
  /// the Tensor dimensions of the last dims() - 'begin' - NDIMS into the last
  /// dimension of the result. If 'begin' < 0 then the |'begin'| leading
  /// dimensions of size 1 will be added. If 'begin' + NDIMS > dims() then
  /// 'begin' + NDIMS - dims() trailing dimensions of size 1 will be added.
  template <typename T, size_t NDIMS = 3>
  typename TTypes<T, NDIMS>::Tensor flat_inner_outer_dims(int64_t begin);

  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::Tensor shaped(gtl::ArraySlice<int64_t> new_sizes);

  /// \brief Return the tensor data to an `Eigen::Tensor` with the new
  /// shape specified in `new_sizes` and cast to a new dtype `T`.
  ///
  /// Using a bitcast is useful for move and copy operations.
  /// The allowed bitcast is the only difference from `shaped()`.
  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::Tensor bit_casted_shaped(
      gtl::ArraySlice<int64_t> new_sizes);

  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::UnalignedTensor unaligned_shaped(
      gtl::ArraySlice<int64_t> new_sizes);

  /// \brief Return the Tensor data as a `TensorMap` of fixed size 1:
  /// `TensorMap<TensorFixedSize<T, 1>>`.

  /// Using `scalar()` allows the compiler to perform optimizations as
  /// the size of the tensor is known at compile time.
  template <typename T>
  typename TTypes<T>::Scalar scalar();

  /// Const versions of all the methods above.
  template <typename T>
  typename TTypes<T>::ConstVec vec() const {
    return tensor<T, 1>();
  }

  template <typename T>
  typename TTypes<T>::ConstMatrix matrix() const {
    return tensor<T, 2>();
  }

  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::ConstTensor tensor() const TF_ATTRIBUTE_NOINLINE;

  /// \brief Return the tensor data to an `Eigen::Tensor` with the
  /// same size but a bitwise cast to the specified dtype `T`.
  ///
  /// Using a bitcast is useful for move and copy operations.
  /// NOTE: this is the same as `tensor()` except a bitcast is allowed.
  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::ConstTensor bit_casted_tensor() const;

  /// \brief Return the tensor data to an `Eigen::Tensor` with the
  /// last dimension elements converted into single elements of a larger type.
  ///
  /// For example, this is useful for kernels that can treat NCHW_VECT_C int8
  /// tensors as NCHW int32 tensors. The sizeof(T) should equal the size of
  /// the original element type * num elements in the original last dimension.
  /// NDIMS should be 1 less than the original number of dimensions.
  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::ConstTensor reinterpret_last_dimension() const;

  template <typename T>
  typename TTypes<T>::ConstFlat flat() const;

  template <typename T>
  typename TTypes<T>::UnalignedConstFlat unaligned_flat() const {
    return unaligned_shaped<T, 1>({NumElements()});
  }

  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::ConstTensor shaped(
      gtl::ArraySlice<int64_t> new_sizes) const;

  /// \brief Return the tensor data to an `Eigen::Tensor` with the new
  /// shape specified in `new_sizes` and cast to a new dtype `T`.
  ///
  /// Using a bitcast is useful for move and copy operations.
  /// The allowed bitcast is the only difference from `shaped()`.
  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::ConstTensor bit_casted_shaped(
      gtl::ArraySlice<int64_t> new_sizes) const;

  template <typename T, size_t NDIMS>
  typename TTypes<T, NDIMS>::UnalignedConstTensor unaligned_shaped(
      gtl::ArraySlice<int64_t> new_sizes) const;

  template <typename T>
  typename TTypes<T>::ConstScalar scalar() const;

  template <typename T, size_t NDIMS = 2>
  typename TTypes<T, NDIMS>::ConstTensor flat_inner_dims() const;

  template <typename T, size_t NDIMS = 2>
  typename TTypes<T, NDIMS>::ConstTensor flat_outer_dims() const;

  template <typename T, size_t NDIMS = 3>
  typename TTypes<T, NDIMS>::ConstTensor flat_inner_outer_dims(
      int64_t begin) const;

  /// Render the first `max_entries` values in `*this` into a string.
  std::string SummarizeValue(int64_t max_entries, bool print_v2 = false) const;

  /// A human-readable summary of the tensor suitable for debugging.
  // `num_values` is the number of actual data values in the tensor
  // included in the message. If the tensor might be resident in
  // GPU/TPU memory use DeviceSafeDebugString instead.
  std::string DebugString(int num_values) const;
  std::string DebugString() const { return DebugString(3); }

  // Variant of DebugString() that should be used for possibly non-CPU tensors.
  // If the tensor is not resident on CPU, we can't read its values as
  // DebugString() does.
  std::string DeviceSafeDebugString() const;

  /// Fill in the `TensorDescription` proto with metadata about the
  /// tensor that is useful for monitoring and debugging.
  void FillDescription(TensorDescription* description) const;

  /// \brief Returns a `StringPiece` mapping the current tensor's buffer.
  ///
  /// The returned `StringPiece` may point to memory location on devices
  /// that the CPU cannot address directly.
  ///
  /// NOTE: The underlying tensor buffer is refcounted, so the lifetime
  /// of the contents mapped by the `StringPiece` matches the lifetime of
  /// the buffer; callers should arrange to make sure the buffer does
  /// not get destroyed while the `StringPiece` is still used.
  ///
  /// REQUIRES: `DataTypeCanUseMemcpy(dtype())`.
  StringPiece tensor_data() const;
  void* data() const;

  /// Copy the other tensor into this tensor, reshape it and reinterpret the
  /// buffer's datatype. If an ok Status is returned, the two tensors now share
  /// the same underlying storage.
  ///
  /// This call requires that the `other` tensor and the given type and shape
  /// are "compatible" (i.e. they occupy the same number of bytes).
  ///
  /// Specifically:
  ///
  /// shape.num_elements() * DataTypeSize(type)
  ///
  /// must equal
  ///
  /// other.num_elements() * DataTypeSize(other.dtype())
  ///
  /// In addition, this function requires:
  ///   * DataTypeSize(other.dtype()) != 0
  ///   * DataTypeSize(type) != 0
  ///
  /// If any of the requirements are not met, errors::InvalidArgument is
  /// returned.
  Status BitcastFrom(const Tensor& other, DataType dtype,
                     const TensorShape& shape);

  /// Like BitcastFrom, but CHECK fails if any preconditions are not met.
  ///
  /// Deprecated. Use BitcastFrom instead and check the returned Status.
  void UnsafeCopyFromInternal(const Tensor& other, DataType dtype,
                              const TensorShape& shape) {
    TF_CHECK_OK(BitcastFrom(other, dtype, shape));
  }

  // Returns true if the refcount on buf_ and any possible underlying root
  // buffer is one.
  bool RefCountIsOne() const;

  // Experimental. Returns the refcount on buf_ if it points to a regular
  // TensorBuffer. If buf_ points to a SubBuffer, returns -1.
  int RefCount() const;

  // Returns the type of the underlying memory.
  AllocatorMemoryType GetMemoryType() const { return buf_->GetMemoryType(); }

 private:
  void CheckType(DataType expected_dtype) const;
  void CheckTypeAndIsAligned(DataType expected_dtype) const;
  void CheckIsAlignedAndSingleElement() const;
  void set_dtype(DataType t) { shape_.set_data_type(t); }

  // TensorShape's InlineVector.
  static gtl::InlinedVector<int64_t, 4> ComputeFlatInnerDims(
      gtl::ArraySlice<int64_t> orig, int64_t num_out_dims);
  static gtl::InlinedVector<int64_t, 4> ComputeFlatOuterDims(
      gtl::ArraySlice<int64_t> orig, int64_t num_out_dims);

  TensorShape shape_;
  TensorBuffer* buf_;

  friend class DMAHelper;             // For access to buf_.
  friend class TensorCApi;            // For access to buf_.
  friend class TensorCord;            // For access to buf_.
  friend class TensorReference;       // For access to buf_.
  friend class VariableOp;            // For access to set_shape.
  friend class AutoReloadVariableOp;  // For access to set_shape.
  friend class TensorTestHelper;      // For access to set_shape.
  friend class TensorInterface;       // For access to set_shape.
  friend class CastOpBase;            // For access to set_dtype.
  friend class ScopedAllocator;       // For access to buf_.
  friend class PjRtTensorBufferUtil;  // For access to buf_.
  friend Status batch_util::CopyElementToSlice(
      Tensor element, Tensor* parent,
      int64_t index);  // For access to base<T>().
  friend Status batch_util::CopySliceToElement(
      const Tensor& parent, Tensor* element,
      int64_t index);  // For access to base<T>().
  friend Status batch_util::MaybeMoveSliceToElement(
      Tensor* parent, Tensor* element,
      int64_t index);  // For access to base<T>().
  friend Status batch_util::CopyContiguousSlices(
      const Tensor& src, int64_t src_offset, int64_t dst_offset,
      int64_t num_slices,
      Tensor* dst);  // For access to base<T>().

  bool CanUseDMA() const;

  // Only needed by variable op to set the shape of an uninitialized
  // Tensor.
  // TODO: Remove this when we have a better story for detecting
  // uninitialized tensors.
  void set_shape(const TensorShape& shape) {
    DataType dt = dtype();
    shape_ = shape;
    set_dtype(dt);
  }

  inline void CopyFromInternal(const Tensor& other, const TensorShape& shape) {
    DCHECK_EQ(shape.num_elements(), other.NumElements());
    // Data type will be overwritten if this == &other, since dtype is part of
    // shape.
    DataType other_dtype = other.dtype();
    shape_ = shape;
    set_dtype(other_dtype);
    if (buf_ != other.buf_) {
      if (buf_) buf_->Unref();
      buf_ = other.buf_;
      if (buf_) buf_->Ref();
    }
  }

  template <typename T>
  T* base() const;

  template <size_t NDIMS>
  void FillDimsAndValidateCompatibleShape(
      gtl::ArraySlice<int64_t> new_sizes,
      Eigen::array<Eigen::DenseIndex, NDIMS>* dims) const;

  template <typename T, size_t NDIMS>
  void FillDimsAndValidateCompatibleShape(
      gtl::ArraySlice<int64_t> new_sizes,
      Eigen::array<Eigen::DenseIndex, NDIMS>* dims) const;
};

// Implementation details

// START_SKIP_DOXYGEN

template <typename T>
T* Tensor::base() const {
  return buf_ == nullptr ? nullptr : buf_->base<T>();
}

// This routine is defined out of line for code-space savings
template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::Tensor Tensor::tensor() {
  CheckTypeAndIsAligned(DataTypeToEnum<T>::v());
  return typename TTypes<T, NDIMS>::Tensor(base<T>(),
                                           shape().AsEigenDSizes<NDIMS>());
}

// This routine is defined out of line for code-space savings
template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::ConstTensor Tensor::tensor() const {
  CheckTypeAndIsAligned(DataTypeToEnum<T>::v());
  return typename TTypes<T, NDIMS>::ConstTensor(base<const T>(),
                                                shape().AsEigenDSizes<NDIMS>());
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::Tensor Tensor::bit_casted_tensor() {
  CHECK(IsAligned());
  return typename TTypes<T, NDIMS>::Tensor(base<T>(),
                                           shape().AsEigenDSizes<NDIMS>());
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::ConstTensor Tensor::bit_casted_tensor() const {
  CHECK(IsAligned());
  return typename TTypes<T, NDIMS>::ConstTensor(base<const T>(),
                                                shape().AsEigenDSizes<NDIMS>());
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::Tensor Tensor::reinterpret_last_dimension() {
  if (NDIMS == dims()) {
    return tensor<T, NDIMS>();
  }
  CHECK(IsAligned());
  CHECK_EQ(static_cast<int>(NDIMS), dims() - 1);
  CHECK_EQ(static_cast<int64_t>(sizeof(T)),
           shape_.dim_sizes()[NDIMS] * DataTypeSize(dtype()));
  Eigen::array<Eigen::DenseIndex, NDIMS> dims;
  for (int d = 0; d < NDIMS; ++d) {
    dims[d] = shape_.dim_sizes()[d];
  }
  return typename TTypes<T, NDIMS>::Tensor(base<T>(), dims);
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::ConstTensor Tensor::reinterpret_last_dimension()
    const {
  if (NDIMS == dims()) {
    return tensor<T, NDIMS>();
  }
  CHECK(IsAligned());
  CHECK_EQ(static_cast<int>(NDIMS), dims() - 1);
  CHECK_EQ(static_cast<int64_t>(sizeof(T)),
           shape_.dim_sizes()[NDIMS] * DataTypeSize(dtype()));
  Eigen::array<Eigen::DenseIndex, NDIMS> dims;
  for (int d = 0; d < NDIMS; ++d) {
    dims[d] = shape_.dim_sizes()[d];
  }
  return typename TTypes<T, NDIMS>::ConstTensor(base<const T>(), dims);
}

template <size_t NDIMS>
void Tensor::FillDimsAndValidateCompatibleShape(
    gtl::ArraySlice<int64_t> new_sizes,
    Eigen::array<Eigen::DenseIndex, NDIMS>* dims) const {
  CHECK_EQ(NDIMS, new_sizes.size());
  int64_t new_num_elements = 1;
  for (size_t d = 0; d < NDIMS; d++) {
    new_num_elements *= new_sizes[d];
    (*dims)[d] = new_sizes[d];
  }
  CHECK_EQ(new_num_elements, NumElements());
}

template <typename T, size_t NDIMS>
void Tensor::FillDimsAndValidateCompatibleShape(
    gtl::ArraySlice<int64_t> new_sizes,
    Eigen::array<Eigen::DenseIndex, NDIMS>* dims) const {
  CHECK_EQ(NDIMS, new_sizes.size());
  int64_t new_num_elements = 1;
  for (size_t d = 0; d < NDIMS; d++) {
    new_num_elements *= new_sizes[d];
    (*dims)[d] = new_sizes[d];
  }
  const int element_size = DataTypeSize(BaseType(dtype()));
  if (element_size > 0) {
    CHECK_EQ(new_num_elements * static_cast<int64_t>(sizeof(T)),
             NumElements() * element_size);
  } else {
    // DataTypeSize() returns 0 for some data types. In this case, assume that T
    // has the same size as the buffer type.
    // NOTE: If we can be sure that DataTypeSize() does not return 0 for all POD
    // types, then we should check DataTypeToEnum<T>::v() == dtype(). Or simply
    // check if `element_size > 0` to err when bit cast is attempted on Tensor
    // of unknown data type size.
    CHECK_EQ(new_num_elements, NumElements());
  }
}

template <typename T>
typename TTypes<T>::Flat Tensor::flat() {
  // Equivalent to 'return shaped<T, 1>({NumElements()});'
  CheckTypeAndIsAligned(DataTypeToEnum<T>::v());
  Eigen::array<Eigen::DenseIndex, 1> dims;
  dims[0] = NumElements();
  return typename TTypes<T, 1>::Tensor(base<T>(), dims);
}

template <typename T>
typename TTypes<T>::ConstFlat Tensor::flat() const {
  // Equuivalent to 'return shaped<T, 1>({NumElements()});'
  CheckTypeAndIsAligned(DataTypeToEnum<T>::v());
  Eigen::array<Eigen::DenseIndex, 1> dims;
  dims[0] = NumElements();
  return typename TTypes<T, 1>::ConstTensor(base<T>(), dims);
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::Tensor Tensor::shaped(
    gtl::ArraySlice<int64_t> new_sizes) {
  CheckTypeAndIsAligned(DataTypeToEnum<T>::v());
  Eigen::array<Eigen::DenseIndex, NDIMS> dims;
  FillDimsAndValidateCompatibleShape(new_sizes, &dims);
  return typename TTypes<T, NDIMS>::Tensor(base<T>(), dims);
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::Tensor Tensor::bit_casted_shaped(
    gtl::ArraySlice<int64_t> new_sizes) {
  CHECK(IsAligned());
  Eigen::array<Eigen::DenseIndex, NDIMS> dims;
  FillDimsAndValidateCompatibleShape<T>(new_sizes, &dims);
  return typename TTypes<T, NDIMS>::Tensor(base<T>(), dims);
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::UnalignedTensor Tensor::unaligned_shaped(
    gtl::ArraySlice<int64_t> new_sizes) {
  CheckType(DataTypeToEnum<T>::v());
  Eigen::array<Eigen::DenseIndex, NDIMS> dims;
  FillDimsAndValidateCompatibleShape(new_sizes, &dims);
  return typename TTypes<T, NDIMS>::UnalignedTensor(base<T>(), dims);
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::ConstTensor Tensor::shaped(
    gtl::ArraySlice<int64_t> new_sizes) const {
  CheckType(DataTypeToEnum<T>::v());
  CHECK(IsAligned()) << "ptr = " << base<void>();
  Eigen::array<Eigen::DenseIndex, NDIMS> dims;
  FillDimsAndValidateCompatibleShape(new_sizes, &dims);
  return typename TTypes<T, NDIMS>::ConstTensor(base<T>(), dims);
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::ConstTensor Tensor::bit_casted_shaped(
    gtl::ArraySlice<int64_t> new_sizes) const {
  CHECK(IsAligned());
  Eigen::array<Eigen::DenseIndex, NDIMS> dims;
  FillDimsAndValidateCompatibleShape<T>(new_sizes, &dims);
  return typename TTypes<T, NDIMS>::ConstTensor(base<T>(), dims);
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::UnalignedConstTensor Tensor::unaligned_shaped(
    gtl::ArraySlice<int64_t> new_sizes) const {
  CheckType(DataTypeToEnum<T>::v());
  Eigen::array<Eigen::DenseIndex, NDIMS> dims;
  FillDimsAndValidateCompatibleShape(new_sizes, &dims);
  return typename TTypes<T, NDIMS>::UnalignedConstTensor(base<T>(), dims);
}

template <typename T>
typename TTypes<T>::Scalar Tensor::scalar() {
  static_assert(
      !std::is_same<T, std::string>::value,
      "std::string is no longer a scalar type, use tensorflow::tstring");
  CheckIsAlignedAndSingleElement();
  return typename TTypes<T>::Scalar(base<T>());
}

template <typename T>
typename TTypes<T>::ConstScalar Tensor::scalar() const {
  static_assert(
      !std::is_same<T, std::string>::value,
      "std::string is no longer a scalar type, use tensorflow::tstring");
  CheckIsAlignedAndSingleElement();
  return typename TTypes<T>::ConstScalar(base<T>());
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::Tensor Tensor::flat_inner_dims() {
  return shaped<T, NDIMS>(ComputeFlatInnerDims(shape_.dim_sizes(), NDIMS));
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::Tensor Tensor::flat_outer_dims() {
  return shaped<T, NDIMS>(ComputeFlatOuterDims(shape_.dim_sizes(), NDIMS));
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::Tensor Tensor::flat_inner_outer_dims(int64_t begin) {
  gtl::InlinedVector<int64_t, 4> flat_outer =
      ComputeFlatOuterDims(shape_.dim_sizes(), begin + NDIMS);
  return shaped<T, NDIMS>(ComputeFlatInnerDims(flat_outer, NDIMS));
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::ConstTensor Tensor::flat_inner_dims() const {
  return shaped<T, NDIMS>(ComputeFlatInnerDims(shape_.dim_sizes(), NDIMS));
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::ConstTensor Tensor::flat_outer_dims() const {
  return shaped<T, NDIMS>(ComputeFlatOuterDims(shape_.dim_sizes(), NDIMS));
}

template <typename T, size_t NDIMS>
typename TTypes<T, NDIMS>::ConstTensor Tensor::flat_inner_outer_dims(
    int64_t begin) const {
  gtl::InlinedVector<int64_t, 4> flat_outer =
      ComputeFlatOuterDims(shape_.dim_sizes(), begin + NDIMS);
  return shaped<T, NDIMS>(ComputeFlatInnerDims(flat_outer, NDIMS));
}

inline Tensor::Tensor(const Tensor& other)
    : shape_(other.shape()), buf_(other.buf_) {
  if (buf_) buf_->Ref();
}

inline Tensor::Tensor(Tensor&& other)
    : shape_(std::move(other.shape_)), buf_(other.buf_) {
  other.buf_ = nullptr;
}

class Tensor::HostScalarTensorBufferBase : public TensorBuffer {
 public:
  using TensorBuffer::TensorBuffer;
  bool GetAllocatedBytes(size_t* out_bytes) const final;
  void FillAllocationDescription(AllocationDescription* proto) const final;
};

// A packed representation for a single scalar value of type `T`, and a
// `TensorBuffer` implementation that describes (and manages the lifetime of)
// that value.
template <typename T>
struct Tensor::ValueAndTensorBuffer {
  class HostScalarTensorBuffer : public Tensor::HostScalarTensorBufferBase {
   public:
    explicit HostScalarTensorBuffer(void* data)
        : HostScalarTensorBufferBase(data) {}
    size_t size() const final { return sizeof(T); }
    TensorBuffer* root_buffer() final { return this; }

    // Override `operator delete` so that calling `delete this` in
    // `core::Refcounted::Unref()` for an object of this type will free
    // the enclosing `ValueAndTensorBuffer` for the tensor buffer.
    //
    // NOTE(mrry): The definition of this method must be outside the class
    // definition in order to satisfy some compilers.
    static void operator delete(void* ptr);

    static void operator delete(void*, void*) {
      // Some compilers require an overridden class-specific deallocation
      // function, which will be called if placement `new` throws an
      // exception.
    }

   private:
    ~HostScalarTensorBuffer() override { static_cast<T*>(data())->~T(); }
  };

  T value;
  HostScalarTensorBuffer tensor_buffer;
};

/* static */
template <typename T>
void Tensor::ValueAndTensorBuffer<T>::HostScalarTensorBuffer::operator delete(
    void* ptr) {
  // Use a dummy object to compute to offset of
  // `ValueAndTensorBuffer::tensor_buffer`, because `offsetof()` is not
  // necessarily defined on this non-POD type (until C++17).
  //
  // NOTE(mrry): Using `sizeof(Tensor::ValueAndTensorBuffer<T>)` here requires
  // us to define this method outside the class definition, so that it is not
  // considered an incomplete type.
  typename std::aligned_storage<sizeof(Tensor::ValueAndTensorBuffer<T>),
                                alignof(Tensor::ValueAndTensorBuffer<T>)>::type
      dummy_storage_;
  Tensor::ValueAndTensorBuffer<T>* dummy_object =
      reinterpret_cast<Tensor::ValueAndTensorBuffer<T>*>(&dummy_storage_);
  intptr_t offset = reinterpret_cast<intptr_t>(&dummy_object->tensor_buffer) -
                    reinterpret_cast<intptr_t>(dummy_object);

  port::AlignedFree(static_cast<char*>(ptr) - offset);
}

template <typename T>
Tensor::Tensor(T value, host_scalar_tag tag) {
  auto* value_and_buf = static_cast<Tensor::ValueAndTensorBuffer<T>*>(
      port::AlignedMalloc(sizeof(typename Tensor::ValueAndTensorBuffer<T>),
                          EIGEN_MAX_ALIGN_BYTES));
  new (&value_and_buf->value) T(std::move(value));
  new (&value_and_buf->tensor_buffer)
      typename Tensor::ValueAndTensorBuffer<T>::HostScalarTensorBuffer(
          value_and_buf);
  buf_ = &value_and_buf->tensor_buffer;
  set_dtype(DataTypeToEnum<T>::value);
}

inline Tensor& Tensor::operator=(Tensor&& other) {
  // Avoid self-assignment, since we might destroy our underlying buffer.
  if (&other != this) {
    shape_ = std::move(other.shape_);
    if (buf_) buf_->Unref();
    buf_ = other.buf_;
    other.buf_ = nullptr;
  }
  return *this;
}

// END_SKIP_DOXYGEN

}  // namespace tensorflow

#endif  // TENSORFLOW_CORE_FRAMEWORK_TENSOR_H_

1.3 TensorFlow Lite Models

A TensorFlow Lite model is represented in an efficient, portable format called FlatBuffers, identified by the .tflite file extension. Compared with TensorFlow's protocol-buffer model format, this format has several advantages: smaller size (smaller code footprint) and faster inference (the data is accessed directly, without an extra parsing/unpacking step), which lets TensorFlow Lite run efficiently on devices with limited compute and memory.

A TensorFlow Lite model can optionally include metadata, containing a human-readable model description and machine-readable data, so that pre- and post-processing pipelines can be generated automatically during on-device inference. See "Adding metadata" for details.

You can obtain a TensorFlow Lite model in the following ways (a loading sketch follows the list):

Use an existing TensorFlow Lite model: to pick an existing model, see the TensorFlow Lite examples. The model may or may not contain metadata.

Create a TensorFlow Lite model: use TensorFlow Lite Model Maker to build a model from your own custom dataset. By default, all such models contain metadata.

Convert a TensorFlow model into a TensorFlow Lite model: use the TensorFlow Lite Converter. During conversion you can apply optimizations such as quantization to reduce model size and latency with little or no loss of accuracy. By default, converted models contain no metadata.
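As a hedged sketch of how such a .tflite FlatBuffer is typically consumed from C++, loading the model and building an interpreter looks roughly like this:

#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Hedged sketch: memory-map a .tflite FlatBuffer and build an interpreter.
// The FlatBuffer is used in place; no parsing/unpacking step is required.
std::unique_ptr<tflite::Interpreter> LoadTfLiteModel(const char* path) {
  auto model = tflite::FlatBufferModel::BuildFromFile(path);
  if (!model) return nullptr;
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk)
    return nullptr;
  return interpreter;
}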

2 Pipeline

The following analyzes the core of this calculator, its Process method.

2.1 TfLiteConverterCalculator

Based on the input tag, Process dispatches to either the CPU path or the GPU path and finally converts the input data into standard TfLiteTensors.

absl::Status TfLiteConverterCalculator::Process(CalculatorContext* cc) {
  if (use_gpu_) {
    if (cc->Inputs().Tag(kGpuBufferTag).IsEmpty()) {
      return absl::OkStatus();
    }
    if (!initialized_) {
      MP_RETURN_IF_ERROR(InitGpu(cc));
      initialized_ = true;
    }
    // Convert to GPU tensors type.
    MP_RETURN_IF_ERROR(ProcessGPU(cc));
  } else {
    // Convert to CPU tensors or Matrix type.
    MP_RETURN_IF_ERROR(ProcessCPU(cc));
  }
  return absl::OkStatus();
}

2.1.1 GPU

Before ProcessGPU runs, InitGpu initializes the GPU context and compiles the conversion shader program (GLSL on OpenGL platforms, MSL on Metal/iOS). The shader samples the input 2D RGBA texture over the image region and reads each pixel as a vec4. The listing below is the body of InitGpu; its signature, absl::Status TfLiteConverterCalculator::InitGpu(CalculatorContext* cc), appears again in the excerpt at the end of this section.


#if MEDIAPIPE_TFLITE_GPU_SUPPORTED
  // Get input image sizes.
  const auto& input =
      cc->Inputs().Tag(kGpuBufferTag).Get<mediapipe::GpuBuffer>();
  mediapipe::ImageFormat::Format format =
      mediapipe::ImageFormatForGpuBufferFormat(input.format());
  gpu_data_out_ = absl::make_unique<GPUData>();
  gpu_data_out_->elements = input.height() * input.width() * max_num_channels_;
  const bool include_alpha = (max_num_channels_ == 4);
  const bool single_channel = (max_num_channels_ == 1);
  if (!(format == mediapipe::ImageFormat::GRAY8 ||
        format == mediapipe::ImageFormat::SRGB ||
        format == mediapipe::ImageFormat::SRGBA))
    RET_CHECK_FAIL() << "Unsupported GPU input format.";
  if (include_alpha && (format != mediapipe::ImageFormat::SRGBA))
    RET_CHECK_FAIL() << "Num input channels is less than desired output.";
#endif  // MEDIAPIPE_TFLITE_GPU_SUPPORTED

#if MEDIAPIPE_TFLITE_GL_INFERENCE
  MP_RETURN_IF_ERROR(gpu_helper_.RunInGlContext(
      [this, &include_alpha, &input, &single_channel]() -> absl::Status {
        // Device memory.
        MP_RETURN_IF_ERROR(
            ::tflite::gpu::gl::CreateReadWriteShaderStorageBuffer<float>(
                gpu_data_out_->elements, &gpu_data_out_->buffer));

        // Shader to convert GL Texture to Shader Storage Buffer Object (SSBO),
        // with normalization to either: [0,1] or [-1,1].
        const std::string shader_source = absl::Substitute(
            R"( #version 310 es
          layout(local_size_x = $0, local_size_y = $0) in;
          layout(binding = 0) uniform sampler2D input_texture;
          layout(std430, binding = 1) buffer Output {float elements[];} output_data;
          ivec2 width_height = ivec2($1, $2);
          void main() {
            ivec2 gid = ivec2(gl_GlobalInvocationID.xy);
            if (gid.x >= width_height.x || gid.y >= width_height.y) return;
            vec4 pixel = texelFetch(input_texture, gid, 0);
            $3  // normalize [-1,1]
            int linear_index = $7 * ($4 * width_height.x + gid.x);
            output_data.elements[linear_index + 0] = pixel.x;  // r channel
            $5  // g & b channels
            $6  // alpha channel
          })",
            /*$0=*/kWorkgroupSize, /*$1=*/input.width(), /*$2=*/input.height(),
            /*$3=*/
            output_range_.has_value()
                ? absl::Substitute(
                      "pixel = pixel * float($0) + float($1);",
                      (output_range_->second - output_range_->first),
                      output_range_->first)
                : "",
            /*$4=*/flip_vertically_ ? "(width_height.y - 1 - gid.y)" : "gid.y",
            /*$5=*/
            single_channel
                ? ""
                : R"(output_data.elements[linear_index + 1] = pixel.y;
                            output_data.elements[linear_index + 2] = pixel.z;)",
            /*$6=*/
            include_alpha ? "output_data.elements[linear_index + 3] = pixel.w;"
                          : "",
            /*$7=*/max_num_channels_);
        MP_RETURN_IF_ERROR(GlShader::CompileShader(
            GL_COMPUTE_SHADER, shader_source, &gpu_data_out_->shader));
        MP_RETURN_IF_ERROR(GlProgram::CreateWithShader(
            gpu_data_out_->shader, &gpu_data_out_->program));
        return absl::OkStatus();
      }));

#elif MEDIAPIPE_TFLITE_METAL_INFERENCE

  RET_CHECK(include_alpha)
      << "iOS GPU inference currently accepts only RGBA input.";

  // Device memory.
  id<MTLDevice> device = gpu_helper_.mtlDevice;
  gpu_data_out_->buffer =
      [device newBufferWithLength:gpu_data_out_->elements * sizeof(float)
                          options:MTLResourceStorageModeShared];

  // Shader to convert GL Texture to Metal Buffer,
  // with normalization to either: [0,1] or [-1,1].
  const std::string shader_source = absl::Substitute(
      R"(
  #include <metal_stdlib>

  using namespace metal;

  kernel void convertKernel(
      texture2d<half, access::sample> in_tex  [[ texture(0) ]],
      device float*                   out_buf [[ buffer(1) ]],
      uint2                           gid     [[ thread_position_in_grid ]]) {
    if (gid.x >= in_tex.get_width() || gid.y >= in_tex.get_height()) return;
    constexpr sampler texture_sampler(coord::pixel, address::clamp_to_edge);
    const float2 coord = float2(gid.x, gid.y);
    $0 pixel = $0(in_tex.sample(texture_sampler, coord).$1);
    $2   // normalize [-1,1]
    const int linear_index = $4 * ($3 * in_tex.get_width() + gid.x);
    out_buf[linear_index + 0] = pixel.x;
    out_buf[linear_index + 1] = pixel.y;
    out_buf[linear_index + 2] = pixel.z;
    $5  // alpha channel
  }
      )",
      /*$0=*/include_alpha ? "float4" : "float3",
      /*$1=*/include_alpha ? "rgba" : "rgb",
      /*$2=*/
      output_range_.has_value()
          ? absl::Substitute("pixel = pixel * float($0) + float($1);",
                             (output_range_->second - output_range_->first),
                             output_range_->first)
          : "",
      /*$3=*/flip_vertically_ ? "(in_tex.get_height() - 1 - gid.y)" : "gid.y",
      /*$4=*/include_alpha ? 4 : 3,
      /*$5=*/include_alpha ? "out_buf[linear_index + 3] = pixel.w;" : "");

  NSString* library_source =
      [NSString stringWithUTF8String:shader_source.c_str()];
  NSError* error = nil;
  id<MTLLibrary> library =
      [device newLibraryWithSource:library_source options:nullptr error:&error];
  RET_CHECK(library != nil) << "Couldn't create shader library "
                            << [[error localizedDescription] UTF8String];
  id<MTLFunction> kernel_func = nil;
  kernel_func = [library newFunctionWithName:@"convertKernel"];
  RET_CHECK(kernel_func != nil) << "Couldn't create kernel function.";
  gpu_data_out_->pipeline_state =
      [device newComputePipelineStateWithFunction:kernel_func error:&error];
  RET_CHECK(gpu_data_out_->pipeline_state != nil)
      << "Couldn't create pipeline state "
      << [[error localizedDescription] UTF8String];
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE

  return absl::OkStatus();
}

CreateSourceTexture

gpu_helper_.CreateSourceTexture(input) wraps the input GpuBuffer as a GlTexture that the shader can sample (it can also convert an ImageFrame to a GpuBuffer).

In MediaPipe, GpuBuffer is a class for storing data in GPU memory; it can hold images, video frames, audio, and other data.

GpuBuffer provides a cross-platform abstraction over GPU memory, allowing data to be transferred and operated on across different GPUs and platforms.

GpuBuffer(GpuBuffer&& other) = default;

auto src = gpu_helper_.CreateSourceTexture(input);

GlTexture GlCalculatorHelperImpl::MapGpuBuffer(const GpuBuffer& gpu_buffer,
                                               GlTextureView view) {
  if (gpu_buffer.format() != GpuBufferFormat::kUnknown) {
    // TODO: do the params need to be reset here??
    glBindTexture(view.target(), view.name());
    GlTextureInfo info = GlTextureInfoForGpuBufferFormat(
        gpu_buffer.format(), view.plane(), GetGlVersion());
    gl_context_->SetStandardTextureParams(view.target(),
                                          info.gl_internal_format);
    glBindTexture(view.target(), 0);
  }
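  // ... (remainder of MapGpuBuffer omitted from this excerpt)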

This code is the MapGpuBuffer function of GlCalculatorHelperImpl; it maps a GpuBuffer onto a GlTexture. It first checks whether the GpuBuffer's format is GpuBufferFormat::kUnknown; if it is not, it builds a GlTextureInfo object from the GpuBuffer's format, the texture view, and the OpenGL version, and uses it to set the standard texture parameters.

Concretely, glBindTexture first binds the texture object to the view's target, GlTextureInfoForGpuBufferFormat then produces the GlTextureInfo, gl_context_->SetStandardTextureParams applies the standard texture parameters, and finally glBindTexture is called again to unbind the texture.

ProcessGPU

absl::Status TfLiteConverterCalculator::ProcessGPU(CalculatorContext* cc) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE
  // GpuBuffer to tflite::gpu::GlBuffer conversion.
  const auto& input =
      cc->Inputs().Tag(kGpuBufferTag).Get<mediapipe::GpuBuffer>();
  MP_RETURN_IF_ERROR(
      gpu_helper_.RunInGlContext([this, &input]() -> absl::Status {
        // Convert GL texture into TfLite GlBuffer (SSBO).
        auto src = gpu_helper_.CreateSourceTexture(input);
        glActiveTexture(GL_TEXTURE0 + 0);
        glBindTexture(GL_TEXTURE_2D, src.name());
        MP_RETURN_IF_ERROR(gpu_data_out_->buffer.BindToIndex(1));
        const tflite::gpu::uint3 workgroups = {
            NumGroups(input.width(), kWorkgroupSize),
            NumGroups(input.height(), kWorkgroupSize), 1};
        MP_RETURN_IF_ERROR(gpu_data_out_->program.Dispatch(workgroups));
        glBindBuffer(GL_SHADER_STORAGE_BUFFER, 0);
        glBindTexture(GL_TEXTURE_2D, 0);
        src.Release();
        return absl::OkStatus();
      }));

  // Copy into outputs.
  auto output_tensors = absl::make_unique<std::vector<GpuTensor>>();
  MP_RETURN_IF_ERROR(
      gpu_helper_.RunInGlContext([this, &output_tensors]() -> absl::Status {
        output_tensors->resize(1);
        {
          GpuTensor& tensor = output_tensors->at(0);
          MP_RETURN_IF_ERROR(CreateReadWriteShaderStorageBuffer<float>(
              gpu_data_out_->elements, &tensor));
          MP_RETURN_IF_ERROR(CopyBuffer(gpu_data_out_->buffer, tensor));
        }
        return absl::OkStatus();
      }));
  cc->Outputs()
      .Tag(kTensorsGpuTag)
      .Add(output_tensors.release(), cc->InputTimestamp());
#elif MEDIAPIPE_TFLITE_METAL_INFERENCE
  // GpuBuffer to id<MTLBuffer> conversion.
  const auto& input =
      cc->Inputs().Tag(kGpuBufferTag).Get<mediapipe::GpuBuffer>();
  id<MTLCommandBuffer> command_buffer = [gpu_helper_ commandBuffer];

  id<MTLTexture> src_texture = [gpu_helper_ metalTextureWithGpuBuffer:input];
  command_buffer.label = @"TfLiteConverterCalculatorConvertAndBlit";
  id<MTLComputeCommandEncoder> compute_encoder =
      [command_buffer computeCommandEncoder];
  [compute_encoder setComputePipelineState:gpu_data_out_->pipeline_state];
  [compute_encoder setTexture:src_texture atIndex:0];
  [compute_encoder setBuffer:gpu_data_out_->buffer offset:0 atIndex:1];
  MTLSize threads_per_group = MTLSizeMake(kWorkgroupSize, kWorkgroupSize, 1);
  MTLSize threadgroups =
      MTLSizeMake(NumGroups(input.width(), kWorkgroupSize),
                  NumGroups(input.height(), kWorkgroupSize), 1);
  [compute_encoder dispatchThreadgroups:threadgroups
                  threadsPerThreadgroup:threads_per_group];
  [compute_encoder endEncoding];

  // Copy into outputs.
  // TODO Avoid this copy.
  auto output_tensors = absl::make_unique<std::vector<GpuTensor>>();
  output_tensors->resize(1);
  id<MTLDevice> device = gpu_helper_.mtlDevice;
  output_tensors->at(0) =
      [device newBufferWithLength:gpu_data_out_->elements * sizeof(float)
                          options:MTLResourceStorageModeShared];
  [MPPMetalUtil blitMetalBufferTo:output_tensors->at(0)
                             from:gpu_data_out_->buffer
                         blocking:false
                    commandBuffer:command_buffer];

  cc->Outputs()
      .Tag(kTensorsGpuTag)
      .Add(output_tensors.release(), cc->InputTimestamp());
#else
  RET_CHECK_FAIL() << "GPU processing is not enabled.";
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE

  return absl::OkStatus();
}
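
The workgroup counts passed to Dispatch / dispatchThreadgroups above come from NumGroups, which is essentially a ceiling division of the image size by the workgroup size. A minimal equivalent is shown below (the 640x480 example is illustrative):

// Ceiling division: how many workgroups of `group_size` threads are needed to
// cover `size` elements; equivalent to the NumGroups helper used above.
int NumGroups(int size, int group_size) {
  return (size + group_size - 1) / group_size;
}

// Example: a 640x480 input with a workgroup size of 8 dispatches
// NumGroups(640, 8) x NumGroups(480, 8) x 1 = 80 x 60 x 1 workgroups.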

absl::Status TfLiteConverterCalculator::InitGpu(CalculatorContext* cc) {
#if MEDIAPIPE_TFLITE_GPU_SUPPORTED
  // Get input image sizes.
  const auto& input =
      cc->Inputs().Tag(kGpuBufferTag).Get<mediapipe::GpuBuffer>();
  mediapipe::ImageFormat::Format format =
      mediapipe::ImageFormatForGpuBufferFormat(input.format());
  gpu_data_out_ = absl::make_unique<GPUData>();
  gpu_data_out_->elements = input.height() * input.width() * max_num_channels_;
  const bool include_alpha = (max_num_channels_ == 4);
  const bool single_channel = (max_num_channels_ == 1);
  if (!(format == mediapipe::ImageFormat::GRAY8 ||
        format == mediapipe::ImageFormat::SRGB ||
        format == mediapipe::ImageFormat::SRGBA))
    RET_CHECK_FAIL() << "Unsupported GPU input format.";
  if (include_alpha && (format != mediapipe::ImageFormat::SRGBA))
    RET_CHECK_FAIL() << "Num input channels is less than desired output.";
#endif  // MEDIAPIPE_TFLITE_GPU_SUPPORTED

#if MEDIAPIPE_TFLITE_GL_INFERENCE
  MP_RETURN_IF_ERROR(gpu_helper_.RunInGlContext(
      [this, &include_alpha, &input, &single_channel]() -> absl::Status {
        // Device memory.
        MP_RETURN_IF_ERROR(
            ::tflite::gpu::gl::CreateReadWriteShaderStorageBuffer<float>(
                gpu_data_out_->elements, &gpu_data_out_->buffer));

        // Shader to convert GL Texture to Shader Storage Buffer Object (SSBO),
        // with normalization to either: [0,1] or [-1,1].
        const std::string shader_source = absl::Substitute(
            R"( #version 310 es
          layout(local_size_x = $0, local_size_y = $0) in;
          layout(binding = 0) uniform sampler2D input_texture;
          layout(std430, binding = 1) buffer Output {float elements[];} output_data;
          ivec2 width_height = ivec2($1, $2);
          void main() {
            ivec2 gid = ivec2(gl_GlobalInvocationID.xy);
            if (gid.x >= width_height.x || gid.y >= width_height.y) return;
            vec4 pixel = texelFetch(input_texture, gid, 0);
            $3  // normalize [-1,1]
            int linear_index = $7 * ($4 * width_height.x + gid.x);
            output_data.elements[linear_index + 0] = pixel.x;  // r channel
            $5  // g & b channels
            $6  // alpha channel
          })",
            /*$0=*/kWorkgroupSize, /*$1=*/input.width(), /*$2=*/input.height(),
            /*$3=*/
            output_range_.has_value()
                ? absl::Substitute(
                      "pixel = pixel * float($0) + float($1);",
                      (output_range_->second - output_range_->first),
                      output_range_->first)
                : "",
            /*$4=*/flip_vertically_ ? "(width_height.y - 1 - gid.y)" : "gid.y",
            /*$5=*/
            single_channel
                ? ""
                : R"(output_data.elements[linear_index + 1] = pixel.y;
                            output_data.elements[linear_index + 2] = pixel.z;)",
            /*$6=*/
            include_alpha ? "output_data.elements[linear_index + 3] = pixel.w;"
                          : "",
            /*$7=*/max_num_channels_);
        MP_RETURN_IF_ERROR(GlShader::CompileShader(
            GL_COMPUTE_SHADER, shader_source, &gpu_data_out_->shader));
        MP_RETURN_IF_ERROR(GlProgram::CreateWithShader(
            gpu_data_out_->shader, &gpu_data_out_->program));
        return absl::OkStatus();
      }));

#elif MEDIAPIPE_TFLITE_METAL_INFERENCE

  RET_CHECK(include_alpha)
      << "iOS GPU inference currently accepts only RGBA input.";

  // Device memory.
  id<MTLDevice> device = gpu_helper_.mtlDevice;
  gpu_data_out_->buffer =
      [device newBufferWithLength:gpu_data_out_->elements * sizeof(float)
                          options:MTLResourceStorageModeShared];

  // Shader to convert GL Texture to Metal Buffer,
  // with normalization to either: [0,1] or [-1,1].
  const std::string shader_source = absl::Substitute(
      R"(
  #include <metal_stdlib>

  using namespace metal;

  kernel void convertKernel(
      texture2d<half, access::sample> in_tex  [[ texture(0) ]],
      device float*                   out_buf [[ buffer(1) ]],
      uint2                           gid     [[ thread_position_in_grid ]]) {
    if (gid.x >= in_tex.get_width() || gid.y >= in_tex.get_height()) return;
    constexpr sampler texture_sampler(coord::pixel, address::clamp_to_edge);
    const float2 coord = float2(gid.x, gid.y);
    $0 pixel = $0(in_tex.sample(texture_sampler, coord).$1);
    $2   // normalize [-1,1]
    const int linear_index = $4 * ($3 * in_tex.get_width() + gid.x);
    out_buf[linear_index + 0] = pixel.x;
    out_buf[linear_index + 1] = pixel.y;
    out_buf[linear_index + 2] = pixel.z;
    $5  // alpha channel
  }
      )",
      /*$0=*/include_alpha ? "float4" : "float3",
      /*$1=*/include_alpha ? "rgba" : "rgb",
      /*$2=*/
      output_range_.has_value()
          ? absl::Substitute("pixel = pixel * float($0) + float($1);",
                             (output_range_->second - output_range_->first),
                             output_range_->first)
          : "",
      /*$3=*/flip_vertically_ ? "(in_tex.get_height() - 1 - gid.y)" : "gid.y",
      /*$4=*/include_alpha ? 4 : 3,
      /*$5=*/include_alpha ? "out_buf[linear_index + 3] = pixel.w;" : "");

  NSString* library_source =
      [NSString stringWithUTF8String:shader_source.c_str()];
  NSError* error = nil;
  id<MTLLibrary> library =
      [device newLibraryWithSource:library_source options:nullptr error:&error];
  RET_CHECK(library != nil) << "Couldn't create shader library "
                            << [[error localizedDescription] UTF8String];
  id<MTLFunction> kernel_func = nil;
  kernel_func = [library newFunctionWithName:@"convertKernel"];
  RET_CHECK(kernel_func != nil) << "Couldn't create kernel function.";
  gpu_data_out_->pipeline_state =
      [device newComputePipelineStateWithFunction:kernel_func error:&error];
  RET_CHECK(gpu_data_out_->pipeline_state != nil)
      << "Couldn't create pipeline state "
      << [[error localizedDescription] UTF8String];
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE

  return absl::OkStatus();
}
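
The normalization substitution in both shaders ($3 in the GL version, $2 in the Metal version) maps a sampled pixel, already in [0,1], linearly onto output_range_ via pixel * scale + offset. A small sketch of the arithmetic (not calculator code):

#include <utility>

// The shaders normalize with: pixel = pixel * scale + offset, where
//   scale  = output_range.second - output_range.first
//   offset = output_range.first
// and the sampled pixel is assumed to already be in [0,1].
float Normalize(float pixel, std::pair<float, float> output_range) {
  const float scale = output_range.second - output_range.first;
  const float offset = output_range.first;
  return pixel * scale + offset;
}

// Example: output_range = {-1.0f, 1.0f}  =>  pixel * 2.0f - 1.0f
//          output_range = { 0.0f, 1.0f}  =>  pixel unchanged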

2.1.2 CPU

2.2 Model inference (TENSORS_GPU / TENSORS_CPU)

Tensors are assumed to be correctly ordered (added to the model in order).
Input tensors are assumed to be of the correct size and already normalized.
All output TfLiteTensors are destroyed when the graph is closed,
i.e. after graph.WaitUntilDone() returns (see the sketch after this list).
GPU tensor support requires OpenGL ES 3.1+.
This calculator uses a FixedSizeInputStreamHandler by default.
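
The point about output TfLiteTensors being destroyed when the graph closes refers to the normal CalculatorGraph lifecycle. A minimal sketch of running a graph with the standard MediaPipe API (packet feeding omitted for brevity):

#include "mediapipe/framework/calculator_framework.h"

// Minimal graph lifecycle sketch; stream feeding is omitted.
absl::Status RunGraph(const mediapipe::CalculatorGraphConfig& config) {
  mediapipe::CalculatorGraph graph;
  MP_RETURN_IF_ERROR(graph.Initialize(config));
  MP_RETURN_IF_ERROR(graph.StartRun({}));
  // ... AddPacketToInputStream(...) for each input frame ...
  MP_RETURN_IF_ERROR(graph.CloseAllPacketSources());
  // Output TfLiteTensors produced by TfLiteInferenceCalculator are only valid
  // until the graph finishes; after WaitUntilDone() returns they are destroyed.
  return graph.WaitUntilDone();
}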

2.2.1 TfLiteInferenceCalculator

The discussion below ignores the CPU-only parts.

Inference supports GPU acceleration, as well as OpenCL acceleration on other hardware, by loading different backends through tflite_gpu_runner_.
TENSORS_CPU data can instead be accelerated through each platform's optimized instruction sets, typically via a CPU delegate (sketched below).
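
For the CPU path, "optimized instruction sets" in practice usually means a CPU delegate such as XNNPACK. A hedged sketch of attaching it to a tflite::Interpreter with the standard TensorFlow Lite API (this is not the calculator's exact code):

#include "absl/status/status.h"
#include "tensorflow/lite/delegates/xnnpack/xnnpack_delegate.h"
#include "tensorflow/lite/interpreter.h"

// Assumes `interpreter` was already built with tflite::InterpreterBuilder.
// The delegate must outlive the interpreter and be released afterwards with
// TfLiteXNNPackDelegateDelete().
absl::Status AttachXnnpack(tflite::Interpreter* interpreter, int num_threads) {
  TfLiteXNNPackDelegateOptions options = TfLiteXNNPackDelegateOptionsDefault();
  options.num_threads = num_threads;
  TfLiteDelegate* delegate = TfLiteXNNPackDelegateCreate(&options);
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    return absl::InternalError("Failed to apply the XNNPACK delegate.");
  }
  return absl::OkStatus();
}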

absl::Status TfLiteInferenceCalculator::InitTFLiteGPURunner(
    CalculatorContext* cc) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE
  ASSIGN_OR_RETURN(model_packet_, GetModelAsPacket(*cc));
  const auto& model = *model_packet_.Get<TfLiteModelPtr>();

  tflite::ops::builtin::BuiltinOpResolverWithoutDefaultDelegates
      default_op_resolver;
  auto op_resolver_ptr =
      static_cast<const tflite::ops::builtin::BuiltinOpResolver*>(
          &default_op_resolver);
  if (cc->InputSidePackets().HasTag(kCustomOpResolverTag)) {
    op_resolver_ptr = &(cc->InputSidePackets()
                            .Tag(kCustomOpResolverTag)
                            .Get<tflite::ops::builtin::BuiltinOpResolver>());
  }

  // Create runner
  tflite::gpu::InferenceOptions options;
  options.priority1 = allow_precision_loss_
                          ? tflite::gpu::InferencePriority::MIN_LATENCY
                          : tflite::gpu::InferencePriority::MAX_PRECISION;
  options.priority2 = tflite::gpu::InferencePriority::AUTO;
  options.priority3 = tflite::gpu::InferencePriority::AUTO;
  switch (tflite_gpu_runner_usage_) {
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::
        FAST_SINGLE_ANSWER: {
      options.usage = tflite::gpu::InferenceUsage::FAST_SINGLE_ANSWER;
      break;
    }
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::
        SUSTAINED_SPEED: {
      options.usage = tflite::gpu::InferenceUsage::SUSTAINED_SPEED;
      break;
    }
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::
        UNSPECIFIED: {
      return absl::InternalError("inference usage need to be specified.");
    }
  }

  tflite_gpu_runner_ = std::make_unique<tflite::gpu::TFLiteGPURunner>(options);
  switch (tflite_gpu_runner_api_) {
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::OPENGL: {
      tflite_gpu_runner_->ForceOpenGL();
      break;
    }
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::OPENCL: {
      tflite_gpu_runner_->ForceOpenCL();
      break;
    }
    case mediapipe::TfLiteInferenceCalculatorOptions::Delegate::Gpu::ANY: {
      // Do not need to force any specific API.
      break;
    }
  }

Process

Inference itself is driven by interpreter_->Invoke(), or by tflite_gpu_runner_->Invoke() when the advanced GPU API is in use; a plain TensorFlow Lite version of the same flow is sketched below.
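
For reference, the same Invoke-driven flow with plain TensorFlow Lite, outside MediaPipe, looks like the sketch below; the model path and the float input/output types are placeholders:

#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

// Minimal CPU inference flow; "model.tflite" and float I/O are placeholders.
void RunOnce() {
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  interpreter->AllocateTensors();      // allocate input/output tensor buffers
  float* input = interpreter->typed_input_tensor<float>(0);
  // ... fill `input` with pre-processed, normalized data ...
  interpreter->Invoke();               // same call as interpreter_->Invoke() above
  const float* output = interpreter->typed_output_tensor<float>(0);
  (void)input;
  (void)output;
}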

absl::Status TfLiteInferenceCalculator::Process(CalculatorContext* cc) {
  return RunInContextIfNeeded([this, cc]() -> absl::Status {
    // 0. Declare outputs
    auto output_tensors_gpu = absl::make_unique<std::vector<GpuTensor>>();
    auto output_tensors_cpu = absl::make_unique<std::vector<TfLiteTensor>>();

    // 1. Receive pre-processed tensor inputs.
    if (gpu_input_) {
      MP_RETURN_IF_ERROR(ProcessInputsGpu(cc, output_tensors_gpu.get()));
    } else {
      MP_RETURN_IF_ERROR(ProcessInputsCpu(cc, output_tensors_cpu.get()));
    }

    // 2. Run inference.
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    if (gpu_inference_ && use_advanced_gpu_api_) {
      RET_CHECK(tflite_gpu_runner_->Invoke().ok());
    } else {
      RET_CHECK_EQ(interpreter_->Invoke(), kTfLiteOk);
    }
#elif MEDIAPIPE_TFLITE_METAL_INFERENCE
    // Metal delegate supports external command buffer only if all input and
    // output buffers are on GPU.
    if (gpu_inference_ && gpu_input_ && gpu_output_) {
      id<MTLCommandBuffer> command_buffer = [gpu_helper_ commandBuffer];
      command_buffer.label = @"TfLiteInferenceCalculator";
      RET_CHECK(
          TFLGpuDelegateSetCommandBuffer(delegate_.get(), command_buffer));
      RET_CHECK_EQ(interpreter_->Invoke(), kTfLiteOk);
      [command_buffer commit];
    } else {
      RET_CHECK_EQ(interpreter_->Invoke(), kTfLiteOk);
    }
#else   // MEDIAPIPE_TFLITE_GL_INFERENCE
    RET_CHECK_EQ(interpreter_->Invoke(), kTfLiteOk);
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE

    // 3. Output processed tensors.
    if (gpu_output_ || use_advanced_gpu_api_) {
      MP_RETURN_IF_ERROR(ProcessOutputsGpu(cc, std::move(output_tensors_cpu),
                                           std::move(output_tensors_gpu)));
    } else {
      MP_RETURN_IF_ERROR(ProcessOutputsCpu(cc, std::move(output_tensors_cpu)));
    }

    return absl::OkStatus();
  });
}

ProcessInputsGpu

 absl::Status TfLiteInferenceCalculator::ProcessInputsGpu(
    CalculatorContext* cc, std::vector<GpuTensor>* output_tensors_gpu) {
  if (cc->Inputs().Tag(kTensorsGpuTag).IsEmpty()) {
    return absl::OkStatus();
  }
  if (use_advanced_gpu_api_) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    const auto& input_tensors =
        cc->Inputs().Tag(kTensorsGpuTag).Get<std::vector<GpuTensor>>();
    RET_CHECK(!input_tensors.empty());
    for (int i = 0; i < input_tensors.size(); ++i) {
      MP_RETURN_IF_ERROR(
          tflite_gpu_runner_->BindSSBOToInputTensor(input_tensors[i].id(), i));
    }
    if (gpu_output_) {
      // Allocate new output tensor.
      output_tensors_gpu->resize(gpu_data_out_.size());
      for (int i = 0; i < gpu_data_out_.size(); ++i) {
        GpuTensor& tensor = output_tensors_gpu->at(i);
        MP_RETURN_IF_ERROR(CreateReadWriteShaderStorageBuffer<float>(
            gpu_data_out_[i]->elements, &tensor));
        MP_RETURN_IF_ERROR(
            tflite_gpu_runner_->BindSSBOToOutputTensor(tensor.id(), i));
      }
    } else {
      // Re-use internal output tensor.
      for (int i = 0; i < gpu_data_out_.size(); ++i) {
        MP_RETURN_IF_ERROR(tflite_gpu_runner_->BindSSBOToOutputTensor(
            gpu_data_out_[i]->buffer.id(), i));
      }
    }
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
  } else if (gpu_input_) {
    // Read GPU input into SSBO.
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    const auto& input_tensors =
        cc->Inputs().Tag(kTensorsGpuTag).Get<std::vector<GpuTensor>>();
    RET_CHECK_GT(input_tensors.size(), 0);
    // Explicit copy input.
    gpu_data_in_.resize(input_tensors.size());
    for (int i = 0; i < input_tensors.size(); ++i) {
      MP_RETURN_IF_ERROR(CopyBuffer(input_tensors[i], gpu_data_in_[i]->buffer));
    }
#elif MEDIAPIPE_TFLITE_METAL_INFERENCE
    const auto& input_tensors =
        cc->Inputs().Tag(kTensorsGpuTag).Get<std::vector<GpuTensor>>();
    RET_CHECK_GT(input_tensors.size(), 0);
    // Explicit copy input with conversion float 32 bits to 16 bits.
    gpu_data_in_.resize(input_tensors.size());
    id<MTLCommandBuffer> command_buffer = [gpu_helper_ commandBuffer];
    command_buffer.label = @"TfLiteInferenceCalculatorConvert";
    id<MTLComputeCommandEncoder> compute_encoder =
        [command_buffer computeCommandEncoder];
    [compute_encoder setComputePipelineState:fp32_to_fp16_program_];
    for (int i = 0; i < input_tensors.size(); ++i) {
      [compute_encoder setBuffer:input_tensors[i] offset:0 atIndex:0];
      [compute_encoder setBuffer:gpu_data_in_[i]->buffer offset:0 atIndex:1];
      constexpr int kWorkgroupSize = 64;  // Block size for GPU shader.
      MTLSize threads_per_group = MTLSizeMake(kWorkgroupSize, 1, 1);
      const int threadgroups =
          NumGroups(gpu_data_in_[i]->elements, kWorkgroupSize);
      [compute_encoder dispatchThreadgroups:MTLSizeMake(threadgroups, 1, 1)
                      threadsPerThreadgroup:threads_per_group];
    }
    [compute_encoder endEncoding];
    [command_buffer commit];
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
  }

  return absl::OkStatus();
}

ProcessOutputsGpu

absl::Status TfLiteInferenceCalculator::ProcessOutputsGpu(
    CalculatorContext* cc,
    std::unique_ptr<std::vector<TfLiteTensor>> output_tensors_cpu,
    std::unique_ptr<std::vector<GpuTensor>> output_tensors_gpu) {
  if (use_advanced_gpu_api_) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    if (gpu_output_) {
      // Send out pre-allocated tensors.
      cc->Outputs()
          .Tag(kTensorsGpuTag)
          .Add(output_tensors_gpu.release(), cc->InputTimestamp());
    } else {
      // Download to CPU for output.
      const auto& tensor_indexes = interpreter_->inputs();
      for (int i = 0; i < tensor_indexes.size(); ++i) {
        TfLiteTensor* tensor = interpreter_->tensor(tensor_indexes[i]);
        std::vector<float> gpu_data(tensor->bytes / sizeof(float));
        MP_RETURN_IF_ERROR(gpu_data_out_[i]->buffer.Read(
            absl::MakeSpan(tensor->data.f, tensor->bytes)));
        output_tensors_cpu->emplace_back(*tensor);
      }
      // Output result tensors (CPU).
      cc->Outputs()
          .Tag(kTensorsTag)
          .Add(output_tensors_cpu.release(), cc->InputTimestamp());
    }
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
  } else if (gpu_output_) {
#if MEDIAPIPE_TFLITE_GL_INFERENCE
    // Output result tensors (GPU).
    output_tensors_gpu->resize(gpu_data_out_.size());
    for (int i = 0; i < gpu_data_out_.size(); ++i) {
      GpuTensor& tensor = output_tensors_gpu->at(i);
      // Allocate output tensor.
      MP_RETURN_IF_ERROR(CreateReadWriteShaderStorageBuffer<float>(
          gpu_data_out_[i]->elements, &tensor));
      MP_RETURN_IF_ERROR(CopyBuffer(gpu_data_out_[i]->buffer, tensor));
    }
    cc->Outputs()
        .Tag(kTensorsGpuTag)
        .Add(output_tensors_gpu.release(), cc->InputTimestamp());
#elif MEDIAPIPE_TFLITE_METAL_INFERENCE
    // Output result tensors (GPU).
    output_tensors_gpu->resize(gpu_data_out_.size());
    id<MTLDevice> device = gpu_helper_.mtlDevice;
    id<MTLCommandBuffer> command_buffer = [gpu_helper_ commandBuffer];
    command_buffer.label = @"TfLiteInferenceBPHWC4Convert";
    id<MTLComputeCommandEncoder> convert_command =
        [command_buffer computeCommandEncoder];
    for (int i = 0; i < gpu_data_out_.size(); ++i) {
      // Allocate output tensor.
      output_tensors_gpu->at(i) =
          [device newBufferWithLength:gpu_data_out_[i]->elements * sizeof(float)
                              options:MTLResourceStorageModeShared];
      // Reshape tensor.
      [converter_from_BPHWC4_ convertWithEncoder:convert_command
                                           shape:gpu_data_out_[i]->shape
                                    sourceBuffer:gpu_data_out_[i]->buffer
                                 convertedBuffer:output_tensors_gpu->at(i)];
    }
    [convert_command endEncoding];
    [command_buffer commit];
    cc->Outputs()
        .Tag(kTensorsGpuTag)
        .Add(output_tensors_gpu.release(), cc->InputTimestamp());
#endif  // MEDIAPIPE_TFLITE_GL_INFERENCE
  }

  return absl::OkStatus();
}

Invoke

tflite_gpu_runner.cc

absl::Status TFLiteGPURunner::Invoke() { return runner_->Run(); }

Building the TFLiteGPURunner

The OpenCL runner path is skipped here; only the OpenGL path is shown.

absl::Status TFLiteGPURunner::InitializeOpenGL(
    std::unique_ptr<InferenceBuilder>* builder) {
  gl::InferenceEnvironmentOptions env_options;
  gl::InferenceEnvironmentProperties properties;
  gl::InferenceOptions gl_options;
  gl_options.priority1 = options_.priority1;
  gl_options.priority2 = options_.priority2;
  gl_options.priority3 = options_.priority3;
  gl_options.usage = options_.usage;
  MP_RETURN_IF_ERROR(
      NewInferenceEnvironment(env_options, &gl_environment_, &properties));
  MP_RETURN_IF_ERROR(gl_environment_->NewInferenceBuilder(std::move(*graph_gl_),
                                                          gl_options, builder));
  return absl::OkStatus();
}
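
For context, the overall TFLiteGPURunner lifecycle used by the calculator is sketched below; the method names follow the calls visible in this article, but exact signatures may differ between MediaPipe versions:

// Sketch of the TFLiteGPURunner lifecycle, inside an absl::Status-returning
// function. Method names follow the calls seen in TfLiteInferenceCalculator;
// exact signatures may vary between MediaPipe versions.
MP_RETURN_IF_ERROR(tflite_gpu_runner_->InitializeWithModel(
    model, *op_resolver_ptr, /*allow_quant_ops=*/true));   // parse the model
MP_RETURN_IF_ERROR(tflite_gpu_runner_->Build());           // build the GL/CL program

// Per frame (see ProcessInputsGpu and Process above):
MP_RETURN_IF_ERROR(
    tflite_gpu_runner_->BindSSBOToInputTensor(input_ssbo_id, /*input_id=*/0));
MP_RETURN_IF_ERROR(
    tflite_gpu_runner_->BindSSBOToOutputTensor(output_ssbo_id, /*output_id=*/0));
MP_RETURN_IF_ERROR(tflite_gpu_runner_->Invoke());           // run inference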
