AMD GPU Kernel Driver Analysis (3): The dma-fence Synchronization Model


In the Linux kernel's AMDGPU driver, dma-fence plays a central role. AMDGPU render and decode operations often involve several parties referencing the same buffer at once. Take rendering or video decoding as an example: after the application writes render/decode commands into a buffer shared with the GPU, it submits the task to the GPU's run queue to await execution. From that point on, the application must not only track the task's progress but also refrain from modifying the buffer until the task completes, while the GPU in turn needs exclusive access to execute the commands in the buffer. Once the GPU finishes, the application wants prompt notification so the buffer can be reclaimed and reused; that notification is typically delivered by an interrupt bound to the buffer. Protecting and synchronizing all of this with the classic shared-buffer-plus-lock approach is not only inefficient, it also tangles several kernel mechanisms together without a unified management framework, making the code hard to maintain. dma-fence instead provides a simple, convenient framework that embeds atomic operations, sleep/wakeup, and synchronous/asynchronous event notification into the various buffer management frameworks, ties these mechanisms together coherently, reduces busy-waiting in user space, and makes buffer usage smarter and more efficient.

Taking AMDGPU video decoding as an example, a dma-fence ring buffer decouples the application's decode requests from the decode work itself; both task submission and completion notification are exchanged through the dma-fences bound to the buffer:

To make the analysis easier, I extracted this part of the AMDGPU implementation from the 5.4 kernel and wrote a standalone, loadable kernel module demo, similar to a C-model. In the AMDGPU driver, the submission context and the completion-notification context are a kernel thread and an interrupt respectively; in the demo, both are implemented as kernel threads.

The working model is shown in the figure below:

The fence array has 256 slots, each holding one fence, and together they form a ring buffer. sync_seq is the write sequence number: it only ever increases, and sync_seq mod 256 is its index into the array, i.e. the ring buffer's write pointer.

last_seq is the sequence number of the most recently consumed fence (one whose bound buffer has finished processing). Like sync_seq, it is a monotonically increasing natural sequence, and last_seq mod 256 is its index into the array, i.e. the ring buffer's read pointer.

fence_seq, by contrast, is a sample of the write position. In AMDGPU the hardware ring is shared by all processes, any of which may submit new rendering work at any moment, so sync_seq changes constantly and is unsuitable to work with directly. fence_seq is a snapshot taken at a point in time: all buffers (fences) between last_seq and fence_seq are signaled together in the next batch.

When submission is fast but consumption is slow, the writer catches up with the reader, and producer-consumer synchronization becomes necessary: back-pressure propagates from the read side to the write side, and the writer calls dma_fence_wait to block:

When the consumer completes the next task, it releases the current slot and calls dma_fence_signal to wake the write thread, which then resumes and fills new tasks into the fence array:

Freeing the dma-fences

One free pulls the others along with it: freeing the job fence triggers the release of the scheduled fence and the finished fence in turn. Each release is driven by the reference count dropping to zero, which invokes a callback; in the callback, the memory occupied by the fence is freed asynchronously via RCU.

Those are the main corner cases that come up when using the fence array.

The demo code follows:

#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/types.h>
#include <linux/spinlock.h>
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/sched/signal.h>
#include <linux/dma-fence.h>
#include <linux/slab.h>
#include <linux/delay.h>
#include <linux/kthread.h>
#include <linux/sched.h>
#include <uapi/linux/sched/types.h>

#define assert(expr)   \
        if (!(expr)) { \
                printk( "Assertion failed! %s,%s,%s,line=%d\n",\
                                #expr,__FILE__,__func__,__LINE__); \
                BUG(); \
        }

#define num_hw_submission               128
struct fence_driver {
	uint64_t                        gpu_addr;
	volatile uint32_t               *cpu_addr;
	uint32_t                        sync_seq;
	atomic_t                        last_seq;
	bool                            initialized;
	bool                            initialized_emit;
	bool                            initialized_recv;
	unsigned                        num_fences_mask;
	spinlock_t                      lock;
	struct dma_fence                **fences;
	struct mutex                    mutex;
	struct dma_fence                *last_fence;
	struct timer_list               timer;
	struct timer_list               work_timer;
	wait_queue_head_t               job_scheduled;
};

struct fence_set {
	struct dma_fence                scheduled;
	struct dma_fence                finished;
	spinlock_t                      lock;
};

struct job_fence {
	struct dma_fence                job;
	void                            *data;
};

static uint32_t fence_seq;
static struct fence_driver *ring;
static struct task_struct *fence_emit_task;
static struct task_struct *fence_recv_task;
static struct kmem_cache *job_fence_slab;
static struct kmem_cache *sched_fence_slab;

static const char *dma_fence_get_name(struct dma_fence *fence)
{
	return "dma-fence-drv";
}

static bool dma_fence_enable_signal(struct dma_fence *fence)
{
	if (!timer_pending(&ring->work_timer)) {
		mod_timer(&ring->work_timer, jiffies + HZ / 10);
	}

	printk("%s line %d, signal fenceno %lld.\n", __func__, __LINE__, fence->seqno);

	return true;
}

void fencedrv_fence_free(struct rcu_head *rcu)
{
	struct fence_set *fs;
	struct dma_fence *f = container_of(rcu, struct dma_fence, rcu);
	struct job_fence *jb = container_of(f, struct job_fence, job);

	assert(jb->data != NULL);

	fs = (struct fence_set *)jb->data;

	// dma_fence_get() and dma_fence_put() must stay symmetric over the fence's lifetime.
	assert((kref_read(&fs->scheduled.refcount) == 1));
	assert((kref_read(&fs->finished.refcount) == 1));

	//dma_fence_put(&fs->scheduled);
	dma_fence_put(&fs->finished);
	kmem_cache_free(job_fence_slab, jb);

	// dump_stack();
}

static void fencedrv_dma_fence_release(struct dma_fence *fence)
{
	// typical pattern: free the dma fence asynchronously via RCU.
	call_rcu(&fence->rcu, fencedrv_fence_free);
}

static const struct dma_fence_ops fence_ops = {
	.get_driver_name = dma_fence_get_name,
	.get_timeline_name = dma_fence_get_name,
	.enable_signaling = dma_fence_enable_signal,
	.release = fencedrv_dma_fence_release,
};

static int32_t fencedrv_get_ring_avail(void)
{
	uint32_t read_seq, write_seq;

	do {
		read_seq = atomic_read(&ring->last_seq);
		write_seq = ring->sync_seq;
	} while (atomic_read(&ring->last_seq) != read_seq);

	read_seq &= ring->num_fences_mask;
	write_seq &= ring->num_fences_mask;

	pr_err("%s line %d, read_seq %d, write_seq %d.\n",
	       __func__, __LINE__, read_seq, write_seq);

	if (read_seq <= write_seq) {
		return write_seq - read_seq;
	} else {
		return write_seq + num_hw_submission * 2 - read_seq;
	}
}

static const char *dma_fence_get_name_scheduled(struct dma_fence *fence)
{
	return "dma-fence-scheduled";
}

static const char *dma_fence_get_name_finished(struct dma_fence *fence)
{
	return "dma-fence-finished";
}

static void sched_fence_free(struct rcu_head *head);
static void fencedrv_dma_fence_release_scheduled(struct dma_fence *fence)
{
	struct fence_set *fs = container_of(fence, struct fence_set, scheduled);

	// typical pattern: free the fence set asynchronously via RCU.
	call_rcu(&fs->finished.rcu, sched_fence_free);
}

static void fencedrv_dma_fence_release_finished(struct dma_fence *fence)
{
	struct fence_set *fs = container_of(fence, struct fence_set, finished);
	dma_fence_put(&fs->scheduled);
	//while (1);
}


static const struct dma_fence_ops fence_scheduled_ops = {
	.get_driver_name = dma_fence_get_name_scheduled,
	.get_timeline_name = dma_fence_get_name_scheduled,
	.release = fencedrv_dma_fence_release_scheduled,
};

static const struct dma_fence_ops fence_finished_ops = {
	.get_driver_name = dma_fence_get_name_finished,
	.get_timeline_name = dma_fence_get_name_finished,
	.release = fencedrv_dma_fence_release_finished,
};

static struct fence_set *to_sched_fence(struct dma_fence *f)
{
	if (f->ops == &fence_scheduled_ops) {
		return container_of(f, struct fence_set, scheduled);
	}

	if (f->ops == &fence_finished_ops) {
		return container_of(f, struct fence_set, finished);
	}

	return NULL;
}

static void sched_fence_free(struct rcu_head *head)
{
	struct dma_fence *f = container_of(head, struct dma_fence, rcu);
	struct fence_set *fs = to_sched_fence(f);
	if (fs == NULL)
		return;

	assert(f == &fs->finished);

	kmem_cache_free(sched_fence_slab, fs);
	dump_stack();
}

static struct fence_set *init_fence_set(void)
{
	struct fence_set *fs = kmem_cache_alloc(sched_fence_slab, GFP_KERNEL);
	if (fs == NULL) {
		pr_err("%s line %d, alloc fence set from fence set slab failure.\n",
		       __func__, __LINE__);
		return NULL;
	}

	spin_lock_init(&fs->lock);

	dma_fence_init(&fs->scheduled, &fence_scheduled_ops, &fs->lock, 0, 0);
	dma_fence_init(&fs->finished, &fence_finished_ops, &fs->lock, 0, 0);

	return fs;
}

// ref amdgpu_fence_process
static int fence_recv_task_thread(void *data)
{
	struct sched_param sparam = {.sched_priority = 1};
	sched_setscheduler(current, SCHED_FIFO, &sparam);

	//mutex_lock(&ring->mutex);
	while (ring->initialized == false) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (ring->initialized == true) {
			break;
		}
		//mutex_unlock(&ring->mutex);
		schedule();
		//mutex_lock(&ring->mutex);
	}

	set_current_state(TASK_RUNNING);
	//mutex_unlock(&ring->mutex);

	while (!kthread_should_stop() && ring->initialized_recv == true) {
		uint32_t seqno_next = 0;
		uint32_t seq, last_seq;
		int r;

		do {
			// last_seq is the read pointer of fence ring buffer.
			last_seq = atomic_read(&ring->last_seq);
			seq = *ring->cpu_addr;

			if (kthread_should_stop())
				return 0;
		} while (atomic_cmpxchg(&ring->last_seq, last_seq, seq) != last_seq);

		if (del_timer(&ring->work_timer) &&
		    seq != ring->sync_seq) {
			mod_timer(&ring->work_timer, jiffies + HZ / 10);
		}

		//printk("%s line %d, last_seq %d, seq %d, sync_seq %d.\n", __func__, __LINE__, last_seq, seq, ring->sync_seq);

		if (unlikely(seq == last_seq)) {
			msleep(10);
			continue;
		}

		assert(seq > last_seq);

		last_seq &= ring->num_fences_mask;
		seq &= ring->num_fences_mask;

		//printk("%s line %d, last_seq %d, seq %d, sync_seq %d.\n", __func__, __LINE__, last_seq, seq, ring->sync_seq);
		do {
			struct dma_fence *fence, **ptr;

			++last_seq;
			last_seq &= ring->num_fences_mask;
			ptr = &ring->fences[last_seq];
			fence = rcu_dereference_protected(*ptr, 1);

			RCU_INIT_POINTER(*ptr, NULL);
			if (!fence) {
				continue;
			}

			if (seqno_next == 0 || seqno_next == fence->seqno) {
				seqno_next = fence->seqno + 1;
			} else {
				pr_err("%s line %d, seqno is not continuous, expect %d, actual %lld.\n",
				       __func__, __LINE__, seqno_next, fence->seqno);
			}

			printk("%s line %d, last_seq/slot %d, seq %d, signal %lld.\n",
			       __func__, __LINE__, last_seq, seq, fence->seqno);

			if (list_empty(&fence->cb_list)) {
				printk("%s line %d, fence cb list is empty.\n",
				       __func__, __LINE__);
			} else {
				printk("%s line %d, fence cb list is not empty.\n",
				       __func__, __LINE__);
			}

			r = dma_fence_signal(fence);
			if (kthread_should_stop()) {
				dma_fence_put(fence);
				return 0;
			}

			if (r) {
				pr_err("%s line %d, fence already signaled.\n",
				       __func__, __LINE__);
				continue;
				//BUG();
			}

			dma_fence_put(fence);
		} while (last_seq != seq);

		wake_up(&ring->job_scheduled);
	}

	set_current_state(TASK_RUNNING);

	return 0;
}

// ref amdgpu_fence_emit.
static int fence_emit_task_thread(void *data)
{
	int r;
	uint64_t oldwaitseqno = 0;
	struct sched_param sparam = {.sched_priority = 1};

	sched_setscheduler(current, SCHED_FIFO, &sparam);

	//mutex_lock(&ring->mutex);
	while (ring->initialized == false) {
		set_current_state(TASK_UNINTERRUPTIBLE);
		if (ring->initialized == true) {
			break;
		}

		//mutex_unlock(&ring->mutex);
		schedule();
		//mutex_lock(&ring->mutex);
	}

	set_current_state(TASK_RUNNING);
	//mutex_unlock(&ring->mutex);

	while (!kthread_should_stop() && ring->initialized_emit == true) {
#if 0
		msleep(1000);
		printk("%s line %d.\n", __func__, __LINE__);
#else
		struct dma_fence __rcu **ptr;
		struct job_fence *fence;
		uint32_t seq;
		struct fence_set *fs = init_fence_set();

		fence = kmem_cache_alloc(job_fence_slab, GFP_KERNEL);
		if (fence == NULL) {
			pr_err("%s line %d, alloc fence from fence slab failure.\n",
			       __func__, __LINE__);
			return -1;
		}

		// ring->sync_seq is fence ring write pointer.
		seq = ++ring->sync_seq;
		dma_fence_init(&fence->job, &fence_ops, &ring->lock, 0, seq);
		fence->data = fs;

		ptr = &ring->fences[seq & ring->num_fences_mask];

		//printk("%s line %d, seq = %d.\n", __func__, __LINE__, seq);

		if (kthread_should_stop()) {
			// will call fence_ops.release directly to free the fence.
			dma_fence_put(&fence->job);
			continue;
		}

		if (unlikely(rcu_dereference_protected(*ptr, 1))) {
			struct dma_fence *old;
			int diff;

			rcu_read_lock();
			old = dma_fence_get_rcu_safe(ptr);
			rcu_read_unlock();

			if (old) {
				mutex_lock(&ring->mutex);
				//dma_fence_get(old);
				ring->last_fence = old;
				mutex_unlock(&ring->mutex);

				r = dma_fence_wait(old, false);

				mutex_lock(&ring->mutex);
				ring->last_fence = NULL;
				dma_fence_put(old);
				mutex_unlock(&ring->mutex);

				if (kthread_should_stop() || r) {
					// will call fence_ops.release directly to free the fence.
					dma_fence_put(&fence->job);
					continue;
				}

				// if overlap occurs, seq and old->seqno must be congruent:
				// seq ≡ old->seqno (mod num_hw_submission * 2), i.e.
				// seq = q * (num_hw_submission * 2) + old->seqno, with q = 1 typically.
				diff = seq - old->seqno;
				printk("%s line %d, fence woken up, wait seqno %lld, new seq %d, slot %d, diff %d, wake interval %lld, latest seq %d, avail %d.\n",
				       __func__, __LINE__, old->seqno, seq, seq & ring->num_fences_mask, diff, old->seqno - oldwaitseqno,
				       ring->sync_seq, fencedrv_get_ring_avail());

				if (diff != num_hw_submission * 2) {
					pr_err("%s line %d, fatal error, diff does not match ring size.\n",
					       __func__, __LINE__);
				}

				oldwaitseqno = old->seqno;
			}
		}

#if 0
		printk("%s line %d, fence emit, seqno %lld, seq %d, slot %d.\n",
		       __func__, __LINE__, fence->seqno, seq, seq & ring->num_fences_mask);
#endif
		rcu_assign_pointer(*ptr, dma_fence_get(&fence->job));

		// no external user of this fence remains, so drop our reference here.
		dma_fence_put(&fence->job);
#endif
	}

	set_current_state(TASK_RUNNING);
	return 0;
}

void work_timer_fn(struct timer_list *timer)
{
	uint32_t seqno_next = 0;
	uint32_t seq, last_seq;
	int r;

	do {
		last_seq = atomic_read(&ring->last_seq);
		seq = *ring->cpu_addr;
	} while (atomic_cmpxchg(&ring->last_seq, last_seq, seq) != last_seq);

	if (unlikely(seq == last_seq)) {
		goto end;
	}

	assert(seq > last_seq);

	last_seq &= ring->num_fences_mask;
	seq &= ring->num_fences_mask;

	do {
		struct dma_fence *fence, **ptr;

		++last_seq;
		last_seq &= ring->num_fences_mask;
		ptr = &ring->fences[last_seq];
		fence = rcu_dereference_protected(*ptr, 1);

		RCU_INIT_POINTER(*ptr, NULL);
		if (!fence) {
			continue;
		}

		if (seqno_next == 0 || seqno_next == fence->seqno) {
			seqno_next = fence->seqno + 1;
		} else {
			pr_err("%s line %d, seqno is not continuous, expect %d, actual %lld.\n",
			       __func__, __LINE__, seqno_next, fence->seqno);
		}

		r = dma_fence_signal(fence);
		if (r) {
			pr_err("%s line %d, fence already signaled.\n",
			       __func__, __LINE__);
			continue;
			//BUG();
		}

		dma_fence_put(fence);
	} while (last_seq != seq);
end:
	pr_err("%s line %d, work timer triggered.\n", __func__, __LINE__);
	mod_timer(timer, jiffies + HZ / 10);
}

void gpu_process_thread(struct timer_list *timer)
{
	uint32_t seq, oldseq;

	seq = ring->sync_seq;
	oldseq = fence_seq;

	// trigger a job done on device.
	if (fence_seq == 0) {
		if (seq > 6)
			fence_seq = seq - 4;
	} else if ((seq - fence_seq) > 10) {
		fence_seq += (seq - fence_seq) / 2;
		assert(fence_seq > oldseq);
	}

	printk("%s line %d, timer trigger job, latest consume fence %d.\n",
	       __func__, __LINE__, fence_seq);

	mod_timer(timer, jiffies + HZ / 2);
}

static int fencedrv_wait_empty(void)
{
	uint64_t seq = READ_ONCE(ring->sync_seq);
	struct dma_fence *fence, **ptr;
	int r;

	if (!seq)
		return 0;

	fence_seq = seq;
	ptr = &ring->fences[seq & ring->num_fences_mask];
	rcu_read_lock();

	fence = rcu_dereference(*ptr);
	if (!fence || !dma_fence_get_rcu(fence)) {
		rcu_read_unlock();
		return 0;
	}
	rcu_read_unlock();

	r = dma_fence_wait(fence, false);

	printk("%s line %d, wait last fence %lld, seq %lld, r %d.\n", \
	       __func__, __LINE__, fence->seqno, seq, r);
	dma_fence_put(fence);

	return r;
}

static int __init fencedrv_init(void)
{
	if ((num_hw_submission & (num_hw_submission - 1)) != 0) {
		pr_err("%s line %d, num_hw_submission must be power of two.\n",
		       __func__, __LINE__);
		return -1;
	}

	ring = kzalloc(sizeof(*ring), GFP_KERNEL);
	if (ring == NULL) {
		pr_err("%s line %d, alloc fence driver failure.\n",
		       __func__, __LINE__);
		return -ENOMEM;
	}

	// fence_seq is a snapshot of sync_seq, used to process fences in batches.
	ring->cpu_addr = &fence_seq;
	ring->gpu_addr = (uint64_t)&fence_seq;
	ring->sync_seq = 0;
	atomic_set(&ring->last_seq, 0);
	ring->initialized = false;
	ring->initialized_emit = false;
	ring->initialized_recv = false;
	ring->last_fence = NULL;
	ring->num_fences_mask = num_hw_submission * 2 - 1;
	init_waitqueue_head(&ring->job_scheduled);

	spin_lock_init(&ring->lock);
	ring->fences = kcalloc(num_hw_submission * 2, sizeof(void *), GFP_KERNEL);
	if (!ring->fences) {
		pr_err("%s line %d, alloc fence buffer failure.\n",
		       __func__, __LINE__);
		return -ENOMEM;
	}

	printk("%s line %d, fence mask 0x%x, num_hw_submission 0x%x.\n",
	       __func__, __LINE__, ring->num_fences_mask, num_hw_submission);

	job_fence_slab = kmem_cache_create("job_fence_slab", sizeof(struct job_fence), 0,
	                                   SLAB_HWCACHE_ALIGN, NULL);
	if (!job_fence_slab) {
		pr_err("%s line %d, alloc job_fence_slab failure.\n",
		       __func__, __LINE__);
		return -ENOMEM;
	}

	sched_fence_slab = kmem_cache_create("sched_fence_slab", sizeof(struct fence_set), 0,
	                                     SLAB_HWCACHE_ALIGN, NULL);
	if (!sched_fence_slab) {
		pr_err("%s line %d, alloc sched_fence_slab failure.\n",
		       __func__, __LINE__);
		return -ENOMEM;
	}

	mutex_init(&ring->mutex);

	fence_emit_task = kthread_run(fence_emit_task_thread, NULL, "fence_emit");
	if (IS_ERR(fence_emit_task)) {
		pr_err("%s line %d, create fence emit tsk failure.\n",
		       __func__, __LINE__);
		return -1;
	}

	fence_recv_task = kthread_run(fence_recv_task_thread, NULL, "fence_recv");
	if (IS_ERR(fence_recv_task)) {
		pr_err("%s line %d, create fence recv tsk failure.\n",
		       __func__, __LINE__);
		return -1;
	}

	timer_setup(&ring->timer, gpu_process_thread, TIMER_IRQSAFE);
	add_timer(&ring->timer);
	mod_timer(&ring->timer, jiffies + HZ / 2);

	timer_setup(&ring->work_timer, work_timer_fn, TIMER_IRQSAFE);
	add_timer(&ring->work_timer);
	mod_timer(&ring->work_timer, jiffies + HZ / 10);

	printk("%s line %d, module init.\n", __func__, __LINE__);

	ring->initialized = true;
	ring->initialized_emit = true;
	ring->initialized_recv = true;

	wake_up_process(fence_emit_task);
	wake_up_process(fence_recv_task);

	return 0;
}

static void __exit fencedrv_exit(void)
{
	printk("%s line %d, module unload task begin.\n", __func__, __LINE__);

	del_timer_sync(&ring->work_timer);

	mutex_lock(&ring->mutex);
	if ((ring->last_fence != NULL) &&
	    (!test_bit(DMA_FENCE_FLAG_SIGNALED_BIT, &ring->last_fence->flags))) {
		ring->initialized_emit = false;
		dma_fence_signal(ring->last_fence);
		dma_fence_put(ring->last_fence);
	}
	mutex_unlock(&ring->mutex);

	kthread_stop(fence_emit_task);
	printk("%s line %d, module unload task mid.\n", __func__, __LINE__);

	del_timer_sync(&ring->timer);
	fencedrv_wait_empty();

	printk("%s line %d, sync wait avail %d.\n", __func__, __LINE__, fencedrv_get_ring_avail());

	wait_event_killable(ring->job_scheduled, fencedrv_get_ring_avail() <= 1);
	ring->initialized_recv = false;
	kthread_stop(fence_recv_task);

	printk("%s line %d, module unload task end.\n", __func__, __LINE__);

	ring->initialized = false;
	rcu_barrier();
	kmem_cache_destroy(job_fence_slab);
	kmem_cache_destroy(sched_fence_slab);
	kfree(ring->fences);
	kfree(ring);

	fence_emit_task = NULL;
	fence_recv_task = NULL;

	printk("%s line %d, module unload.\n", __func__, __LINE__);
}

module_init(fencedrv_init);
module_exit(fencedrv_exit);
MODULE_LICENSE("GPL");
MODULE_AUTHOR("czl");



End
