volumeWorker()
Like the claim worker, it pulls items from volumeQueue — every key here identifies a PV. If the informer cache still has the PV, the update path is taken.
- Define workFunc: an anonymous function workFunc does the actual work. It returns a boolean quit that tells the outer loop whether to exit.
- Get a key from the queue: ctrl.volumeQueue.Get() pops one key (see the enqueue sketch after this list for where these keys come from). If quit is true, the queue has been shut down and the loop should exit.
- Process the key:
  - defer ctrl.volumeQueue.Done(keyObj) ensures the key is marked done before the function returns.
- Parse the name: cache.SplitMetaNamespaceKey(key) extracts the PV name (PVs are cluster-scoped, so the namespace part is discarded). If parsing fails, log the error and return false — the loop keeps going.
- Try to get the PV from the informer cache:
  - ctrl.volumeLister.Get(name) looks the PV up in the informer cache.
  - If that succeeds, the event must have been an add, update, or sync; call ctrl.updateVolume(volume).
  - If it fails with anything other than a "not found" error, log the error and return false.
- Handle the delete case:
  - If the PV was not in the informer cache, assume this is a delete event.
  - ctrl.volumes.store.GetByKey(key) fetches the PV from the controller's local cache.
  - If that fails, or the PV is not in the local cache either, log and return false.
  - If it succeeds, cast the object to *v1.PersistentVolume and call ctrl.deleteVolume(volume).
- Infinite loop: for { ... } keeps calling workFunc to process events; when workFunc returns true, log and exit the loop.
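Where do these keys come from? The enqueue side is not shown in this section. As a minimal sketch (the handler wiring and the enqueueWork helper below follow the common controller pattern and are assumptions, not code quoted from this controller), the informer event handlers push PV keys into volumeQueue like this:

// Sketch: wiring informer events into volumeQueue. The key produced here is
// the same string that volumeWorker later receives from ctrl.volumeQueue.Get().
volumeInformer.Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
	AddFunc:    func(obj interface{}) { enqueueWork(volumeQueue, obj) },
	UpdateFunc: func(_, newObj interface{}) { enqueueWork(volumeQueue, newObj) },
	DeleteFunc: func(obj interface{}) { enqueueWork(volumeQueue, obj) },
})

// enqueueWork derives a cache key from the object and adds it to the queue.
func enqueueWork(queue workqueue.Interface, obj interface{}) {
	// DeletionHandlingMetaNamespaceKeyFunc also unwraps the tombstone objects
	// delivered with delete events.
	key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj)
	if err != nil {
		klog.Errorf("failed to get key from object: %v", err)
		return
	}
	queue.Add(key)
}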
// volumeWorker processes items from volumeQueue. It must run only once,
// syncVolume is not assured to be reentrant.
func (ctrl *PersistentVolumeController) volumeWorker() {
workFunc := func() bool {
// Keep pulling PV keys off the queue.
keyObj, quit := ctrl.volumeQueue.Get()
if quit {
return true
}
defer ctrl.volumeQueue.Done(keyObj)
key := keyObj.(string)
klog.V(5).Infof("volumeWorker[%s]", key)
_, name, err := cache.SplitMetaNamespaceKey(key)
if err != nil {
klog.V(4).Infof("error getting name of volume %q to get volume from informer: %v", key, err)
return false
}
volume, err := ctrl.volumeLister.Get(name) // 1) Fetch the PV from the informer cache; no direct apiserver access needed.
if err == nil {
// The volume still exists in informer cache, the event must have
// been add/update/sync
ctrl.updateVolume(volume)
return false
}
if !errors.IsNotFound(err) {
klog.V(2).Infof("error getting volume %q from informer: %v", key, err)
return false
}
// The volume is not in informer cache, the event must have been
// "delete"
volumeObj, found, err := ctrl.volumes.store.GetByKey(key) // The PV is gone from the informer cache; look it up in the local cache to handle the delete.
if err != nil {
klog.V(2).Infof("error getting volume %q from cache: %v", key, err)
return false
}
if !found {
// The controller has already processed the delete event and
// deleted the volume from its cache
klog.V(2).Infof("deletion of volume %q was already processed", key)
return false
}
volume, ok := volumeObj.(*v1.PersistentVolume)
if !ok {
klog.Errorf("expected volume, got %+v", volumeObj)
return false
}
ctrl.deleteVolume(volume)
return false
}
for {
if quit := workFunc(); quit {
klog.Infof("volume worker queue shutting down")
return
}
}
}
updateVolume()
If the PV has not changed (it matches the cached version), return immediately and move on to the next item; if it has changed, run syncVolume.
// updateVolume runs in worker thread and handles "volume added",
// "volume updated" and "periodic sync" events.
func (ctrl *PersistentVolumeController) updateVolume(volume *v1.PersistentVolume) {
// Store the new volume version in the cache and do not process it if this
// is an old version.
new, err := ctrl.storeVolumeUpdate(volume) // Store the new PV version in the local cache; "new" is false for stale versions.
if err != nil {
klog.Errorf("%v", err)
}
if !new {
return
}
err = ctrl.syncVolume(volume)
if err != nil {
if errors.IsConflict(err) {
// Version conflict error happens quite often and the controller
// recovers from it easily.
klog.V(3).Infof("could not sync volume %q: %+v", volume.Name, err)
} else {
klog.Errorf("could not sync volume %q: %+v", volume.Name, err)
}
}
}
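storeVolumeUpdate itself is not shown in this section. As a sketch of the idea it implements — keep only the newest version of each PV, judged by ResourceVersion — it could look like the following (the function name and the exact comparison are assumptions; upstream delegates to a shared storeObjectUpdate helper):

// Sketch: report whether the incoming PV is at least as new as the cached
// copy, so updateVolume can skip stale, out-of-order events.
func storeVolumeUpdateSketch(store cache.Store, volume *v1.PersistentVolume) (bool, error) {
	oldObj, found, err := store.GetByKey(volume.Name)
	if err != nil {
		return false, err
	}
	if !found {
		// First time we see this PV; cache it and process the event.
		return true, store.Add(volume)
	}
	oldRV, err := strconv.ParseInt(oldObj.(*v1.PersistentVolume).ResourceVersion, 10, 64)
	if err != nil {
		return false, err
	}
	newRV, err := strconv.ParseInt(volume.ResourceVersion, 10, 64)
	if err != nil {
		return false, err
	}
	if newRV < oldRV {
		// Stale event; the caller returns without calling syncVolume.
		return false, nil
	}
	return true, store.Update(volume)
}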
syncVolume()
- Unused PersistentVolume:
  - If volume.Spec.ClaimRef is nil, the PersistentVolume is unused; its phase is updated to VolumeAvailable.
- Pre-bound PersistentVolume:
  - If volume.Spec.ClaimRef is non-nil but ClaimRef.UID is empty, the PersistentVolume has been reserved for a PersistentVolumeClaim (PVC) that is not bound to it yet; the phase is likewise updated to VolumeAvailable (a concrete example follows this list).
- Bound PersistentVolume:
  - If volume.Spec.ClaimRef.UID is non-empty, the PersistentVolume is bound to a PVC.
  - First, try to fetch the corresponding PVC object from the local cache.
  - If it is not in the local cache, try the informer cache, and then the API server.
  - If it still cannot be found, the PVC has probably been deleted.
- PVC not found:
  - If no matching PVC can be found and the PersistentVolume's phase is neither VolumeReleased nor VolumeFailed, update the phase to VolumeReleased and call reclaimVolume to handle it.
- PVC found but UID mismatch:
  - If the found PVC's UID differs from the UID recorded in the PersistentVolume, the original PVC was deleted and a new PVC with the same name was created. Set claim to nil and proceed as in the not-found case.
- PVC and PV properly bound:
  - If the PVC and PV are correctly bound to each other (claim.Spec.VolumeName == volume.Name), update the PersistentVolume's phase to VolumeBound.
- PVC bound to a different PV:
  - If the PVC is bound to some other PV, what happens next depends on whether the PersistentVolume was dynamically provisioned and on its reclaim policy:
    - If it was dynamically provisioned and the reclaim policy is Delete, release and delete the PersistentVolume.
    - Otherwise, try to unbind the PersistentVolume from the PVC.
- volumeMode mismatch:
  - If, while trying to bind PVC and PV, their volumeMode settings turn out to be incompatible, record warning events and skip the sync.
- Speeding up binding:
  - If the PersistentVolume is waiting to be bound and the binding was not made by the controller, enqueue the claim's key into the claim queue to speed up the binding.
This code covers the many phase transitions in the PersistentVolume lifecycle, keeping persistent storage in Kubernetes correctly managed and usable.
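To make the pre-bound case above concrete, here is an illustrative PV object (all names and sizes are made up): Spec.ClaimRef names a PVC but leaves UID empty, which is exactly the state that syncVolume keeps in phase Available until the claim side completes the binding.

// Sketch: a PV reserved for a specific PVC but not yet bound to it.
// (A real PV also needs a PersistentVolumeSource; it is omitted here.)
pv := &v1.PersistentVolume{
	ObjectMeta: metav1.ObjectMeta{Name: "pv-demo"},
	Spec: v1.PersistentVolumeSpec{
		Capacity: v1.ResourceList{
			v1.ResourceStorage: resource.MustParse("1Gi"),
		},
		AccessModes:                   []v1.PersistentVolumeAccessMode{v1.ReadWriteOnce},
		PersistentVolumeReclaimPolicy: v1.PersistentVolumeReclaimRetain,
		ClaimRef: &v1.ObjectReference{
			Namespace: "default",
			Name:      "pvc-demo",
			// UID intentionally empty: reserved for this PVC, not bound yet,
			// so syncVolume sets the phase to VolumeAvailable.
		},
	},
}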
// syncVolume is the main controller method to decide what to do with a volume.
// It's invoked by appropriate cache.Controller callbacks when a volume is
// created, updated or periodically synced. We do not differentiate between
// these events.
func (ctrl *PersistentVolumeController) syncVolume(volume *v1.PersistentVolume) error {
klog.V(4).Infof("synchronizing PersistentVolume[%s]: %s", volume.Name, getVolumeStatusForLogging(volume))
// Set correct "migrated-to" annotations on PV and update in API server if
// necessary
newVolume, err := ctrl.updateVolumeMigrationAnnotations(volume)
if err != nil {
// Nothing was saved; we will fall back into the same
// condition in the next call to this method
return err
}
volume = newVolume
// [Unit test set 4]
if volume.Spec.ClaimRef == nil {
// Volume is unused
klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is unused", volume.Name)
if _, err := ctrl.updateVolumePhase(volume, v1.VolumeAvailable, ""); err != nil {
// Nothing was saved; we will fall back into the same
// condition in the next call to this method
return err
}
return nil
} else /* pv.Spec.ClaimRef != nil */ {
// Volume is bound to a claim.
if volume.Spec.ClaimRef.UID == "" {
// The PV is reserved for a PVC; that PVC has not yet been
// bound to this PV; the PVC sync will handle it.
klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is pre-bound to claim %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
if _, err := ctrl.updateVolumePhase(volume, v1.VolumeAvailable, ""); err != nil {
// Nothing was saved; we will fall back into the same
// condition in the next call to this method
return err
}
return nil
}
klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound to claim %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
// Get the PVC by _name_
var claim *v1.PersistentVolumeClaim
claimName := claimrefToClaimKey(volume.Spec.ClaimRef)
obj, found, err := ctrl.claims.GetByKey(claimName)
if err != nil {
return err
}
if !found {
// If the PV was created by an external PV provisioner or
// bound by external PV binder (e.g. kube-scheduler), it's
// possible under heavy load that the corresponding PVC is not synced to
// controller local cache yet. So we need to double-check PVC in
// 1) informer cache
// 2) apiserver if not found in informer cache
// to make sure we will not reclaim a PV wrongly.
// Note that only non-released and non-failed volumes will be
// updated to Released state when PVC does not exist.
if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
obj, err = ctrl.claimLister.PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(volume.Spec.ClaimRef.Name)
if err != nil && !apierrors.IsNotFound(err) {
return err
}
found = !apierrors.IsNotFound(err)
if !found {
obj, err = ctrl.kubeClient.CoreV1().PersistentVolumeClaims(volume.Spec.ClaimRef.Namespace).Get(context.TODO(), volume.Spec.ClaimRef.Name, metav1.GetOptions{})
if err != nil && !apierrors.IsNotFound(err) {
return err
}
found = !apierrors.IsNotFound(err)
}
}
}
if !found {
klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s not found", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
// Fall through with claim = nil
} else { // The claim was found.
var ok bool
claim, ok = obj.(*v1.PersistentVolumeClaim)
if !ok {
return fmt.Errorf("Cannot convert object from volume cache to volume %q!?: %#v", claim.Spec.VolumeName, obj)
}
klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s found: %s", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef), getClaimStatusForLogging(claim))
}
if claim != nil && claim.UID != volume.Spec.ClaimRef.UID {
// The claim that the PV was pointing to was deleted, and another
// with the same name created.
klog.V(4).Infof("synchronizing PersistentVolume[%s]: claim %s has different UID, the old one must have been deleted", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
// Treat the volume as bound to a missing claim.
claim = nil
}
if claim == nil {
// If we get into this block, the claim must have been deleted;
// NOTE: reclaimVolume may either release the PV back into the pool or
// recycle it or do nothing (retain)
// Do not overwrite previous Failed state - let the user see that
// something went wrong, while we still re-try to reclaim the
// volume.
if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
// Also, log this only once:
klog.V(2).Infof("volume %q is released and reclaim policy %q will be executed", volume.Name, volume.Spec.PersistentVolumeReclaimPolicy)
if volume, err = ctrl.updateVolumePhase(volume, v1.VolumeReleased, ""); err != nil {
// Nothing was saved; we will fall back into the same condition
// in the next call to this method
return err
}
}
if err = ctrl.reclaimVolume(volume); err != nil {
// Release failed, we will fall back into the same condition
// in the next call to this method
return err
}
if volume.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimRetain {
// volume is being retained, it references a claim that does not exist now.
klog.V(4).Infof("PersistentVolume[%s] references a claim %q (%s) that is not found", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef), volume.Spec.ClaimRef.UID)
}
return nil
} else if claim.Spec.VolumeName == "" {
if pvutil.CheckVolumeModeMismatches(&claim.Spec, &volume.Spec) {
// Binding for the volume won't be called in syncUnboundClaim,
// because findBestMatchForClaim won't return the volume due to volumeMode mismatch.
volumeMsg := fmt.Sprintf("Cannot bind PersistentVolume to requested PersistentVolumeClaim %q due to incompatible volumeMode.", claim.Name)
ctrl.eventRecorder.Event(volume, v1.EventTypeWarning, events.VolumeMismatch, volumeMsg)
claimMsg := fmt.Sprintf("Cannot bind PersistentVolume %q to requested PersistentVolumeClaim due to incompatible volumeMode.", volume.Name)
ctrl.eventRecorder.Event(claim, v1.EventTypeWarning, events.VolumeMismatch, claimMsg)
// Skipping syncClaim
return nil
}
"http://pv.kubernetes.io/bound-by-controller" 的annotation 说明pv、pvc正在绑定中
if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnBoundByController) {
// The binding is not completed; let PVC sync handle it
klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume not bound yet, waiting for syncClaim to fix it", volume.Name)
} else {
// Dangling PV; try to re-establish the link in the PVC sync
klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume was bound and got unbound (by user?), waiting for syncClaim to fix it", volume.Name)
}
// In both cases, the volume is Bound and the claim is Pending.
// Next syncClaim will fix it. To speed it up, we enqueue the claim
// into the controller, which results in syncClaim to be called
// shortly (and in the right worker goroutine).
// This speeds up binding of provisioned volumes - provisioner saves
// only the new PV and it expects that next syncClaim will bind the
// claim to it.
ctrl.claimQueue.Add(claimToClaimKey(claim))
return nil
} else if claim.Spec.VolumeName == volume.Name {
// Volume is bound to a claim properly, update status if necessary
klog.V(4).Infof("synchronizing PersistentVolume[%s]: all is bound", volume.Name)
if _, err = ctrl.updateVolumePhase(volume, v1.VolumeBound, ""); err != nil {
// Nothing was saved; we will fall back into the same
// condition in the next call to this method
return err
}
return nil
} else {
// Volume is bound to a claim, but the claim is bound elsewhere
if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnDynamicallyProvisioned) && volume.Spec.PersistentVolumeReclaimPolicy == v1.PersistentVolumeReclaimDelete {
// This volume was dynamically provisioned for this claim. The
// claim got bound elsewhere, and thus this volume is not
// needed. Delete it.
// Mark the volume as Released for external deleters and to let
// the user know. Don't overwrite existing Failed status!
if volume.Status.Phase != v1.VolumeReleased && volume.Status.Phase != v1.VolumeFailed {
// Also, log this only once:
klog.V(2).Infof("dynamically volume %q is released and it will be deleted", volume.Name)
if volume, err = ctrl.updateVolumePhase(volume, v1.VolumeReleased, ""); err != nil {
// Nothing was saved; we will fall back into the same condition
// in the next call to this method
return err
}
}
if err = ctrl.reclaimVolume(volume); err != nil {
// Deletion failed, we will fall back into the same condition
// in the next call to this method
return err
}
return nil
} else {
// Volume is bound to a claim, but the claim is bound elsewhere
// and it's not dynamically provisioned.
if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnBoundByController) {
// This is part of the normal operation of the controller; the
// controller tried to use this volume for a claim but the claim
// was fulfilled by another volume. We did this; fix it.
klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound by controller to a claim that is bound to another volume, unbinding", volume.Name)
if err = ctrl.unbindVolume(volume); err != nil {
return err
}
return nil
} else {
// The PV must have been created with this ptr; leave it alone.
klog.V(4).Infof("synchronizing PersistentVolume[%s]: volume is bound by user to a claim that is bound to another volume, waiting for the claim to get unbound", volume.Name)
// This just updates the volume phase and clears
// volume.Spec.ClaimRef.UID. It leaves the volume pre-bound
// to the claim.
if err = ctrl.unbindVolume(volume); err != nil {
return err
}
return nil
}
}
}
}
}
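pvutil.CheckVolumeModeMismatches, used above to detect the volumeMode conflict, is defined outside this section. Its core check is small; a sketch (treating a nil volumeMode as Filesystem, which matches the API default) would be:

// Sketch: a PVC and a PV mismatch when their effective volumeModes differ;
// a nil volumeMode defaults to Filesystem.
func volumeModeMismatchSketch(pvcSpec *v1.PersistentVolumeClaimSpec, pvSpec *v1.PersistentVolumeSpec) bool {
	pvcMode := v1.PersistentVolumeFilesystem
	if pvcSpec.VolumeMode != nil {
		pvcMode = *pvcSpec.VolumeMode
	}
	pvMode := v1.PersistentVolumeFilesystem
	if pvSpec.VolumeMode != nil {
		pvMode = *pvSpec.VolumeMode
	}
	return pvcMode != pvMode
}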
reclaimVolume()
This part of the PersistentVolumeController handles reclaiming a PersistentVolume (PV). Every PV carries a reclaim policy (PersistentVolumeReclaimPolicy) that defines what should happen to the PV once it is no longer needed; this function executes the appropriate action for that policy.
- Check whether the PV has been migrated:
  - First, the code checks the PV's annotations for the pvutil.AnnMigratedTo marker. If the PV has been migrated (the marker is present), the function returns nil immediately — no action is needed, because a migrated PV is managed by an external provisioner.
- Act on the reclaim policy (see the scheduleOperation sketch after the code below):
  - The value of Spec.PersistentVolumeReclaimPolicy decides how the PV is reclaimed. There are three policies: Retain, Recycle, and Delete.
  - Retain:
    - Log a message and do nothing. Retain means a released PV is never deleted or cleaned up automatically; an administrator must handle it manually.
  - Recycle (deprecated):
    - Log a message and schedule a recycle operation through ctrl.scheduleOperation, which ultimately runs ctrl.recycleVolumeOperation(volume). Note that Recycle is deprecated in newer Kubernetes versions because it cannot guarantee data safety or integrity.
  - Delete:
    - Log a message and, if one does not already exist, create a start-timestamp entry for the delete operation via ctrl.operationTimestamps.AddIfNotExist. Then schedule a delete operation that runs ctrl.deleteVolumeOperation(volume). If deletion fails, only an error metric is recorded; latency is reported later, once the PV is finally deleted and the deletion event is observed.
- Handle an unknown reclaim policy:
  - If the PV's reclaim policy is none of Retain, Recycle, or Delete, update the PV's phase to Failed and record a warning event of type VolumeUnknownReclaimPolicy with the message "Volume has unrecognized PersistentVolumeReclaimPolicy".
// reclaimVolume implements volume.Spec.PersistentVolumeReclaimPolicy and
// starts appropriate reclaim action.
func (ctrl *PersistentVolumeController) reclaimVolume(volume *v1.PersistentVolume) error {
if migrated := volume.Annotations[pvutil.AnnMigratedTo]; len(migrated) > 0 {
// PV is Migrated. The PV controller should stand down and the external
// provisioner will handle this PV
return nil
}
switch volume.Spec.PersistentVolumeReclaimPolicy {
case v1.PersistentVolumeReclaimRetain:
klog.V(4).Infof("reclaimVolume[%s]: policy is Retain, nothing to do", volume.Name)
case v1.PersistentVolumeReclaimRecycle:
klog.V(4).Infof("reclaimVolume[%s]: policy is Recycle", volume.Name)
opName := fmt.Sprintf("recycle-%s[%s]", volume.Name, string(volume.UID))
ctrl.scheduleOperation(opName, func() error {
ctrl.recycleVolumeOperation(volume)
return nil
})
case v1.PersistentVolumeReclaimDelete:
klog.V(4).Infof("reclaimVolume[%s]: policy is Delete", volume.Name)
opName := fmt.Sprintf("delete-%s[%s]", volume.Name, string(volume.UID))
// create a start timestamp entry in cache for deletion operation if no one exists with
// key = volume.Name, pluginName = provisionerName, operation = "delete"
ctrl.operationTimestamps.AddIfNotExist(volume.Name, ctrl.getProvisionerNameFromVolume(volume), "delete")
ctrl.scheduleOperation(opName, func() error {
_, err := ctrl.deleteVolumeOperation(volume)
if err != nil {
// only report error count to "volume_operation_total_errors"
// latency reporting will happen when the volume get finally
// deleted and a volume deleted event is captured
metrics.RecordMetric(volume.Name, &ctrl.operationTimestamps, err)
}
return err
})
default:
// Unknown PersistentVolumeReclaimPolicy
if _, err := ctrl.updateVolumePhaseWithEvent(volume, v1.VolumeFailed, v1.EventTypeWarning, "VolumeUnknownReclaimPolicy", "Volume has unrecognized PersistentVolumeReclaimPolicy"); err != nil {
return err
}
}
return nil
}
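scheduleOperation, which both the Recycle and Delete branches rely on, is not shown in this section. Upstream it delegates to a GoRoutineMap that allows at most one running operation per name; a minimal sketch of that dedup behavior (assuming k8s.io/kubernetes/pkg/util/goroutinemap and a runningOperations field on the controller) is:

// Sketch: run the operation in its own goroutine, but skip it if an operation
// with the same name (e.g. "delete-pv-demo[uid]") is still running, so a
// volume is never recycled or deleted by two goroutines at once.
func (ctrl *PersistentVolumeController) scheduleOperation(operationName string, operation func() error) {
	err := ctrl.runningOperations.Run(operationName, operation)
	if err != nil {
		if goroutinemap.IsAlreadyExists(err) {
			klog.V(4).Infof("operation %q is already running, skipping", operationName)
		} else {
			klog.Errorf("error scheduling operation %q: %v", operationName, err)
		}
	}
}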
unbindVolume()
- Deep copy: volume.DeepCopy() creates a deep copy volumeClone, ensuring the original volume object is never modified; all changes are applied to volumeClone.
- Check the annotation: the code then checks whether the PV carries the pvutil.AnnBoundByController annotation, which records whether the PV was bound by the controller or pre-bound by the user.
  - If the annotation is present, the PV was bound by the controller: set ClaimRef to nil and delete the pvutil.AnnBoundByController annotation. If the annotation map is empty afterwards, set the annotations field to nil.
  - If the annotation is absent, the PV was pre-bound by the user: clear only the UID field inside ClaimRef.
- Update the PV: ctrl.kubeClient.CoreV1().PersistentVolumes().Update writes the modified PV back to the API server; context.TODO() is used as the context, signalling that a concrete context may be supplied later. On failure, log an error and return it.
- Update the internal cache: ctrl.storeVolumeUpdate updates the PV in the controller's internal cache. On failure, log an error and return it.
- Log the unbind: if all of the above succeeded, klog.V(4).Infof logs that the PV was successfully unbound.
- Update the phase: finally, ctrl.updateVolumePhase sets the PV's phase to v1.VolumeAvailable, marking it as available for binding by a new claim (a sketch of updateVolumePhase follows the code below). If the phase update fails, return the error.
// unbindVolume rolls back previous binding of the volume. This may be necessary
// when two controllers bound two volumes to single claim - when we detect this,
// only one binding succeeds and the second one must be rolled back.
// This method updates both Spec and Status.
// It returns on first error, it's up to the caller to implement some retry
// mechanism.
func (ctrl *PersistentVolumeController) unbindVolume(volume *v1.PersistentVolume) error {
klog.V(4).Infof("updating PersistentVolume[%s]: rolling back binding from %q", volume.Name, claimrefToClaimKey(volume.Spec.ClaimRef))
// Save the PV only when any modification is necessary.
volumeClone := volume.DeepCopy()
if metav1.HasAnnotation(volume.ObjectMeta, pvutil.AnnBoundByController) {
// The volume was bound by the controller.
volumeClone.Spec.ClaimRef = nil
delete(volumeClone.Annotations, pvutil.AnnBoundByController)
if len(volumeClone.Annotations) == 0 {
// No annotations look better than empty annotation map (and it's easier
// to test).
volumeClone.Annotations = nil
}
} else {
// The volume was pre-bound by user. Clear only the binding UID.
volumeClone.Spec.ClaimRef.UID = ""
}
newVol, err := ctrl.kubeClient.CoreV1().PersistentVolumes().Update(context.TODO(), volumeClone, metav1.UpdateOptions{})
if err != nil {
klog.V(4).Infof("updating PersistentVolume[%s]: rollback failed: %v", volume.Name, err)
return err
}
_, err = ctrl.storeVolumeUpdate(newVol)
if err != nil {
klog.V(4).Infof("updating PersistentVolume[%s]: cannot update internal cache: %v", volume.Name, err)
return err
}
klog.V(4).Infof("updating PersistentVolume[%s]: rolled back", newVol.Name)
// Update the status
_, err = ctrl.updateVolumePhase(newVol, v1.VolumeAvailable, "")
return err
}
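updateVolumePhase, called throughout this section, is also defined elsewhere. A minimal sketch of what it must do — clone the PV, set the phase, write the status subresource, refresh the local cache — under the same conventions as the code above (the sketch name and simplified error handling are assumptions):

// Sketch: move the PV to the given phase via the status subresource and keep
// the controller's local cache in sync; returns the updated object.
func (ctrl *PersistentVolumeController) updateVolumePhaseSketch(volume *v1.PersistentVolume, phase v1.PersistentVolumePhase, message string) (*v1.PersistentVolume, error) {
	if volume.Status.Phase == phase {
		// Already in the desired phase; nothing to save.
		return volume, nil
	}
	volumeClone := volume.DeepCopy()
	volumeClone.Status.Phase = phase
	volumeClone.Status.Message = message
	newVol, err := ctrl.kubeClient.CoreV1().PersistentVolumes().UpdateStatus(context.TODO(), volumeClone, metav1.UpdateOptions{})
	if err != nil {
		return newVol, err
	}
	if _, err = ctrl.storeVolumeUpdate(newVol); err != nil {
		return newVol, err
	}
	return newVol, nil
}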