【android10】【binder】【3. Registering a service with servicemanager】


Series index

The full series is indexed here: https://blog.csdn.net/handsomethefirst/article/details/138226266?spm=1001.2014.3001.5501


Table of contents

Series index

1. Introduction

1.1 Sequence diagram

1.2 Flow overview

2 mediaservice startup — source code analysis

2.1 Obtaining the ProcessState object

2.1.1 ProcessState::ProcessState

2.1.2 ProcessState::init

2.1.3 open_driver

2.1.4 binder_open

2.1.5 binder_ioctl (get the driver version)

2.1.6 binder_ioctl (set the max binder thread count)

2.1.7 binder_mmap

2.1.8 binder_update_page_range

2.2 Obtaining the BpServiceManager object

2.2.1 defaultServiceManager

2.2.2 ProcessState::getContextObject

2.2.3 ProcessState::getStrongProxyForHandle

2.2.4 ProcessState::lookupHandleLocked

2.2.5 BpBinder::create

2.2.6 BpBinder::BpBinder

2.2.7 IPCThreadState::self

2.2.8 IPCThreadState::IPCThreadState

2.2.9 IPCThreadState::incWeakHandle

2.2.10 interface_cast

2.2.11 asInterface

2.2.12 queryLocalInterface

2.2.13 BpServiceManager::BpServiceManager

2.2.14 BpInterface::BpInterface

2.2.15 BpRefBase::BpRefBase

2.3 Registering the multimedia service with servicemanager

2.3.1 MediaPlayerService::instantiate

2.3.2 IServiceManager::addService

2.3.3 writeStrongBinder

2.3.4 flatten_binder

2.3.5 BpBinder::transact

2.3.6 IPCThreadState::transact

2.3.7 IPCThreadState::writeTransactionData

2.3.8 IPCThreadState::waitForResponse (send the add-service request to the driver)

2.3.9 IPCThreadState::talkWithDriver

2.3.10 binder_ioctl

2.3.11 binder_thread_write (increment the refcount on SM's handle 0)

2.3.12 binder_thread_write (handle the media service's add request)

2.3.13 binder_transaction

2.4 The mediaplayerservice side goes to sleep

2.4.1 binder_ioctl

2.4.2 binder_thread_read

2.4.3 IPCThreadState::talkWithDriver

2.4.4 IPCThreadState::waitForResponse

2.4.5 talkWithDriver, third loop iteration (synchronous calls only)

2.4.6 binder_ioctl

2.4.7 binder_thread_read

2.5 ServiceManager is woken up

2.5.1 binder_thread_read

2.5.2 binder_ioctl

2.5.3 binder_loop

2.5.4 binder_parse

2.5.5 bio_init

2.5.6 bio_init_from_txn

2.5.7 svcmgr_handler

2.5.8 bio_get_ref

2.5.9 do_add_service

2.5.10 find_svc

2.5.11 binder_acquire

2.5.12 binder_write

2.5.13 binder_ioctl

2.5.14 binder_thread_write

2.5.16 binder_write

2.5.17 binder_ioctl

2.5.18 binder_thread_write

2.6 ServiceManager sends the reply back to mediaservice

2.6.1 bio_put_uint32

2.6.2 bio_alloc

2.6.3 binder_send_reply

2.6.4 binder_write

2.6.5 binder_ioctl

2.6.6 binder_thread_write

2.6.7 binder_transaction

2.7 servicemanager handles its pending work item and goes back to sleep

2.7.1 binder_loop

2.7.2 binder_ioctl

2.7.3 binder_thread_read

2.7.4 binder_parse

2.7.5 The SM main thread sleeps again, waiting for messages from the driver

2.8 mediaservice is woken up and handles the reply

2.8.1 binder_thread_read

2.8.2 binder_ioctl

2.8.3 talkWithDriver

2.8.4 waitForResponse

2.8.5 Parcel::ipcSetDataReference

2.9 ProcessState::startThreadPool

2.9.1 ProcessState::spawnPooledThread

2.9.2 threadLoop

2.9.3 IPCThreadState::joinThreadPool

2.9.4 IPCThreadState::getAndExecuteCommand

2.9.5 IPCThreadState::talkWithDriver

2.9.6 binder_ioctl

2.9.7 binder_thread_write

2.9.8 Endless loop reading messages from the driver

2.9.9 getAndExecuteCommand

2.9.10 talkWithDriver

2.9.11 binder_ioctl

2.9.12 binder_thread_read

2.10 joinThreadPool

2.10.1 IPCThreadState::joinThreadPool

2.10.2 IPCThreadState::getAndExecuteCommand

2.10.3 IPCThreadState::talkWithDriver

2.10.4 binder_ioctl

2.10.5 binder_thread_write

2.10.6 Endless loop reading messages from the driver

2.10.7 getAndExecuteCommand

2.10.8 talkWithDriver

2.10.9 binder_ioctl

2.10.10 binder_thread_read


1. Introduction

This article walks through service registration at the source level, using MediaPlayerService registering itself with servicemanager as the running example.

1.1 Sequence diagram

(The diagram covers the complete flow and is rather dense; save it locally and zoom in to read it.)

1.2 Flow overview

The example here is MediaPlayerService registering itself with servicemanager.

Step 1: mediaservice first obtains the ProcessState singleton. Constructing it opens the binder driver, and the driver creates the binder_proc object describing the mediaservice process. As a service, mediaservice also requests a block of memory and mmaps it into the driver, mapping its user-space region onto kernel pages so that one copy of transaction data is avoided.

Step 2: It then obtains the BpServiceManager object, i.e. the binder proxy for ServiceManager built around handle 0, which is used to talk to servicemanager.

Step 3: MediaPlayerService is initialized. Internally this first creates a MediaPlayerService object, which is a BnMediaPlayerService (a binder entity object), and then calls addService on the BpServiceManager proxy to register the service with servicemanager.

Step 4: Inside the proxy's addService, a Parcel is built and the mediaplayerservice binder entity, the service name, and related fields are written into it. Then BpBinder's communication entry point, transact, is called.

Step 5: transact wraps the data once more and calls ioctl to send the add-service request to the driver, entering kernel mode.

Step 6: ioctl lands in the driver's binder_ioctl, which first copies the mediaplayerservice user-space data into kernel space, pulls out the add command, and processes the add-service message; this leads into binder_transaction.

Step 7: binder_transaction first finds the target process's proc object and a target thread. It then pulls the binder entity that mediaplayerservice packed into the Parcel, creates the corresponding binder_node for mediaplayerservice in the driver, and inserts that node into the mediaplayerservice proc's nodes red-black tree. Next it creates a binder_ref node in the target process's (SM's) binder_proc, adding a reference to mediaservice into SM's refs red-black tree. It then allocates a kernel buffer from the region SM mapped earlier and copies the add-service transaction data into SM's buffer. The add-service transaction is queued on SM's todo list, and the target (SM) wait queue is woken. A TRANSACTION_COMPLETE work item is also queued back on the mediaplayerservice side.

Step 8: As the client, mediaplayerservice goes to sleep, waiting for the server's reply.

Step 9: Recall that once SM starts and registers itself with the driver as the binder context manager, it loops reading from the driver; with no data pending it sleeps on its todo/wait queue. Step 7 woke the SM process, so SM now reads the message from its queue and handles the add-service request.

Step 10: SM saves the mediaplayerservice handle and service name into svclist, completing the registration, and then sends a reply message through the driver. Again via binder_transaction, the driver finds the mediaplayerservice proc object, inserts the reply into the media service's todo queue, and wakes mediaservice, while inserting a pending work item into SM's own queue. When SM is done, it goes back to waiting for the next message from the driver.

Step 11: mediaplayerservice processes the reply.

Step 12: mediaplayerservice creates a new thread, adds it to the binder thread pool, and sends the driver the command to enter the thread loop. The new thread then loops forever reading messages from the driver, blocking when there is nothing to read.

Step 13: The mediaplayerservice main thread also joins the binder thread pool and enters the same endless read loop, blocking when idle; this also keeps the main process from exiting. The user-space half of this flow is sketched as a call chain below.
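For orientation, the user-space half of this round trip collapses to roughly the call chain below. This is a sketch of the flow described in this article, not verbatim AOSP code:

// mediaserver process (user space)                    binder driver (kernel)
MediaPlayerService::instantiate()
 -> BpServiceManager::addService("media.player", new MediaPlayerService())
     -> Parcel::writeStrongBinder()        // flatten_binder: BBinder -> flat_binder_object
     -> BpBinder(0)::transact(ADD_SERVICE_TRANSACTION, data, &reply)
         -> IPCThreadState::transact()
             -> writeTransactionData(BC_TRANSACTION, ...)  // fill binder_transaction_data, queue on mOut
             -> waitForResponse(reply)
                 -> talkWithDriver()
                     -> ioctl(fd, BINDER_WRITE_READ, &bwr) // -> binder_ioctl
                                                           //    -> binder_thread_write
                                                           //       -> binder_transaction:
                                                           //          queue on SM's todo, wake SM,
                                                           //          queue TRANSACTION_COMPLETE back
                 -> ...sleeps until SM's reply (BR_REPLY) arrives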


2 mediaservice startup — source code analysis

This example follows mediaservice registering itself with servicemanager.

//Entry point of mediaserver: brings up the multimedia services and registers them with servicemanager
int main(int argc __unused, char **argv __unused)
{
	signal(SIGPIPE, SIG_IGN);
 
    sp<ProcessState> proc(ProcessState::self());     //obtain the ProcessState singleton
    sp<IServiceManager> sm(defaultServiceManager()); //obtain the BpServiceManager object
    ALOGI("ServiceManager: %p", sm.get());
    AIcu_initializeIcuOrDie();
    MediaPlayerService::instantiate();   //register the multimedia service
    ResourceManagerService::instantiate();
    registerExtensions();
    ProcessState::self()->startThreadPool();    //start the binder thread pool
    IPCThreadState::self()->joinThreadPool();   //add the current (main) thread to the pool

}

Note: startThreadPool essentially creates a new thread, registers it with the binder driver as a main (looper) thread, and lets it loop forever. joinThreadPool then adds the current thread as a binder main thread as well, so at this point there are two binder main threads.

2.1 Obtaining the ProcessState object

2.1.1 ProcessState::ProcessState

1. This is a singleton: it is created only once per process. ProcessState records per-process binder state; it opens the binder driver and keeps the fd, mmaps a (1MB - 8KB) region for receiving transactions, and sets the maximum number of binder threads.

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != nullptr) {
        return gProcess;
    }
    gProcess = new ProcessState(kDefaultDriver);//sp<ProcessState> gProcess;
	//a VNDK macro decides whether this process uses "/dev/binder" or "/dev/vndbinder"; here kDefaultDriver is "/dev/binder"
    return gProcess;
}

2.1.2 ProcessState::init

1. The ProcessState constructor opens the binder driver and sets up the memory mapping: a block of user-space memory is mapped into kernel space. Once the mapping is established, changes made to this region on one side are directly visible on the other, with no extra copy.

ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))//driver is /dev/binder
    , mDriverFD(open_driver(driver))//fd returned from opening the binder driver
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)//lock guarding the thread count, pthread_mutex_t mThreadCountLock;
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)//condition variable, pthread_cond_t mThreadCountDecrement;
    , mExecutingThreadsCount(0)//number of binder threads currently executing commands
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)//maximum thread count, 15
    , mStarvationStartTimeMs(0)//time at which the thread pool became starved
    , mManagesContexts(false)
    , mBinderContextCheckFunc(nullptr)
    , mBinderContextUserData(nullptr)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
    , mCallRestriction(CallRestriction::NONE)//no call restriction
{
    if (mDriverFD >= 0) {
		//mmap the binder fd: maps BINDER_VM_SIZE (1MB - 8KB) of memory for receiving transactions;
		//this lands in the driver's binder_mmap, analyzed in the startup article, so it is not expanded again here.
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
		/**
        if (mVMStart == MAP_FAILED) {//if the mapping failed
            // *sigh*
            ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
            close(mDriverFD);
            mDriverFD = -1;
            mDriverName.clear();
        }
		*/
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}
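For reference, BINDER_VM_SIZE is defined in ProcessState.cpp as 1MB minus two pages, i.e. 1MB - 8KB on a 4KB-page system. Quoted from memory, so treat the exact form as approximate:

#define BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2)  // transaction buffer size
#define DEFAULT_MAX_BINDER_THREADS 15                                    // pool limit set below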

2.1.3 open_driver

Its main jobs are:

1. open the binder driver; the open syscall goes through __open and eventually reaches binder_open in the kernel, returning an fd.

2. query the binder driver version via ioctl into vers, then check that it matches the protocol version compiled into user space.

3. set the maximum number of binder threads via ioctl.

static int open_driver(const char *driver)
{
    int fd = open(driver, O_RDWR | O_CLOEXEC);//driver is /dev/binder; opening it traps into the kernel.
	//open maps to __open and then to the driver's binder_open, which creates the proc object for this
	//device node and hands back an fd, through which all later communication with the driver happens.
    if (fd >= 0) {//a non-negative fd means the open succeeded
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);//query the binder driver version into vers
		/**not executed in the normal path
        if (result == -1) {//on failure, close the driver
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
		*/
		
		/**
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {//on a protocol mismatch, close the driver
          ALOGE("Binder driver protocol(%d) does not match user space protocol(%d)! ioctl() return value: %d",
                vers, BINDER_CURRENT_PROTOCOL_VERSION, result);
            close(fd);
            fd = -1;
        }
		*/
		
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;//maximum thread count, 15
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);//tell the driver the maximum binder thread count
		
		/**
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
		*/
        uint32_t enable = DEFAULT_ENABLE_ONEWAY_SPAM_DETECTION;//value 1: oneway spam detection is enabled by default
        result = ioctl(fd, BINDER_ENABLE_ONEWAY_SPAM_DETECTION, &enable);//push the setting into the driver
		/**
        if (result == -1) {
            ALOGD("Binder ioctl to enable oneway spam detection failed: %s", strerror(errno));
        }
		*/
    } 
	/**
	else {
        ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
    }
	*/
    return fd;
}
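The ioctl command numbers used above come from the binder UAPI header (uapi/linux/android/binder.h). The following subset is quoted from memory and may differ slightly across kernel versions:

#define BINDER_WRITE_READ       _IOWR('b', 1, struct binder_write_read)  /* the workhorse command */
#define BINDER_SET_MAX_THREADS  _IOW('b', 5, __u32)
#define BINDER_SET_CONTEXT_MGR  _IOW('b', 7, __s32)   /* how servicemanager registers itself */
#define BINDER_THREAD_EXIT      _IOW('b', 8, __s32)
#define BINDER_VERSION          _IOWR('b', 9, struct binder_version)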

2.1.4 binder_open

1. Allocate a binder_proc object for the calling process (here, the mediaservice process). The binder_proc structure holds the process's binder bookkeeping.

2. Initialize that binder_proc: set up the todo queue, record the default priority, and so on.

3. Add the binder_proc to binder_procs: the driver keeps a global binder_procs list holding the binder_proc object of every process that has opened the driver.

//Main job: the binder driver creates this user process's own binder_proc entity
static int binder_open(struct inode *nodp, struct file *filp)
{
	struct binder_proc *proc;//proc represents a binder process inside the driver


	proc = kzalloc(sizeof(*proc), GFP_KERNEL);//allocate kernel memory for the binder_proc structure
	if (proc == NULL)
		return -ENOMEM;
	//Initialize the binder_proc, the bookkeeping object for this process's data
	get_task_struct(current);
	proc->tsk = current;//current is the calling thread (here mediaservice's main thread); save its task in proc->tsk
	INIT_LIST_HEAD(&proc->todo);//initialize the todo list
	init_waitqueue_head(&proc->wait);//initialize the wait queue
	proc->default_priority = task_nice(current);//convert the current nice value into the default priority

	binder_lock(__func__);

	binder_stats_created(BINDER_STAT_PROC);//BINDER_STAT_PROC object count +1
	hlist_add_head(&proc->proc_node, &binder_procs);//add proc_node to the list headed by binder_procs
	proc->pid = current->group_leader->pid;
	INIT_LIST_HEAD(&proc->delivered_death);
	filp->private_data = proc;//save proc in filp so later calls can recover it from filp

	binder_unlock(__func__);

	if (binder_debugfs_dir_entry_proc) {
		char strbuf[11];
		snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
		proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
			binder_debugfs_dir_entry_proc, proc, &binder_proc_fops);
	}

	return 0;
}
//per-process bookkeeping
struct binder_proc {
	struct hlist_node proc_node;//node in the global binder_procs hash list, identifying this process
	struct rb_root threads;//root of the binder thread-pool red-black tree. Every binder process has a
	//thread pool maintained by the driver; all threads in the pool live in an RB tree keyed by thread ID
	struct rb_root nodes;//this process's binder entity objects (binder_node), keyed by ptr.
    //In user space: on the server side these are binder local objects, on the client side binder proxy objects.
    //In kernel space: a binder_node describes a local object, a binder_ref describes a proxy object.
	struct rb_root refs_by_desc;//binder_ref RB tree keyed by handle (desc), for fast lookup;
	//this is how a Client shows up inside the binder driver
	struct rb_root refs_by_node;//binder_ref RB tree keyed by node pointer (ptr), for fast lookup
	struct list_head waiting_threads; 
	int pid;//pid of the process group leader
	struct task_struct *tsk;//task control block
	const struct cred *cred;
	struct hlist_node deferred_work_node;
	int deferred_work;
	bool is_dead;

	struct list_head todo;//pending-work queue: each incoming request is wrapped in a work item and queued here

	struct binder_stats stats;
	struct list_head delivered_death;
	int max_threads;//maximum number of threads the driver may ask this process to register.
	//A process registers threads with the driver via ioctl; when the pool has no idle thread left
	//to handle a transaction, the driver can ask the process to register more threads, up to this limit
	int requested_threads;
	int requested_threads_started;
	int tmp_ref;
	long default_priority;
	struct dentry *debugfs_entry;
	struct binder_alloc alloc;
	struct binder_context *context;//stores the binder_node, binder_context_mgr_uid and name
	spinlock_t inner_lock;
	spinlock_t outer_lock;
	struct dentry *binderfs_entry;
};
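The binder_procs list that binder_open adds to is just a driver-global hash list head, declared alongside the context-manager node in binder.c (quoted from memory for the pre-4.14-style sources this article uses):

static HLIST_HEAD(binder_procs);                      /* every opened binder_proc hangs off this */
static struct binder_node *binder_context_mgr_node;  /* servicemanager's entity, i.e. handle 0 */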

2.1.5 binder_ioctl (get the driver version)

Its main job here:

1. Return the driver's protocol version.

//binder_ioctl, version-query path
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;//binder thread
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//binder_stop_on_user_error is 0;
	//this sleeps only while binder_stop_on_user_error < 2 is false, so no sleep here
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc);//get (or create) the binder_thread for the calling thread of this proc
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_VERSION://version query
		if (size != sizeof(struct binder_version)) {
			ret = -EINVAL;
			goto err;
		}
		if (put_user(BINDER_CURRENT_PROTOCOL_VERSION, &((struct binder_version *)ubuf)->protocol_version)) {
			//copy the driver's BINDER_CURRENT_PROTOCOL_VERSION value out to the user-space address ubuf.
			//Note that this is one copy between kernel and user space.
			ret = -EINVAL;
			goto err;
		}
		break;
	default:
		ret = -EINVAL;
		goto err;
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//does not sleep
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}
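The struct that put_user fills in above is trivial (from the binder UAPI header):

struct binder_version {
	__s32 protocol_version;  /* driver protocol version; must match user space's */
};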

2.1.6 binder_ioctl (set the max binder thread count)

//binder_ioctl, BINDER_SET_MAX_THREADS path
//ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); maxThreads is 15
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;//binder thread
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;//ubuf points at the value 15

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//does not sleep
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc);//get the binder_thread for the calling thread of this proc
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_SET_MAX_THREADS://set the maximum number of binder threads
		if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {//copy the value from user space into the kernel
			ret = -EINVAL;
			goto err;
		}
		break;
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}

2.1.7 binder_mmap

Its main jobs are:

1. get_vm_area allocates a contiguous range of kernel virtual address space, the same size as the user-space mapping.

2. binder_update_page_range allocates physical pages and maps them into both the kernel range and the user-space range.

In other words, the calling service process (here mediaservice) now has a buffer of its own that is also mapped into kernel space, which removes one data copy.

//filp is the driver file; vma describes the user-space mapping
static int binder_mmap(struct file *filp, struct vm_area_struct *vma)//vma describes the user virtual range being mapped
{
	int ret;
	struct vm_struct *area;//kernel virtual address range
	struct binder_proc *proc = filp->private_data;//recover the calling process's binder_proc object
	const char *failure_string;
	struct binder_buffer *buffer;//buffer used to carry binder transaction data

	if ((vma->vm_end - vma->vm_start) > SZ_4M)//cap the mapping at 4MB
		vma->vm_end = vma->vm_start + SZ_4M;



	if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {//check whether this mmap is forbidden
		ret = -EPERM;
		failure_string = "bad vm_flags";
		goto err_bad_arg;
	}
	vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;

	mutex_lock(&binder_mmap_lock);
	if (proc->buffer) {//a non-NULL buffer means mmap was already done
		ret = -EBUSY;
		failure_string = "already mapped";
		goto err_already_mapped;
	}

	area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);//allocate a contiguous kernel virtual range, the same size as the user range
	if (area == NULL) {
		ret = -ENOMEM;
		failure_string = "get_vm_area";
		goto err_get_vm_area_failed;
	}
	proc->buffer = area->addr;//point the binder_proc's buffer at the kernel virtual address
	proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;//offset = user-space address - kernel-space address
	mutex_unlock(&binder_mmap_lock);

	proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);
	//allocate the array of physical-page pointers, one entry per page (user range size / 4KB);
	//this array tracks the state of the physical pages binder has allocated
	if (proc->pages == NULL) {
		ret = -ENOMEM;
		failure_string = "alloc page array";
		goto err_alloc_pages_failed;
	}
	proc->buffer_size = vma->vm_end - vma->vm_start;//record the size

	vma->vm_ops = &binder_vm_ops;
	vma->vm_private_data = proc;

	if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {//allocate physical pages and map them
	//into both the kernel range and the user range; only one page is allocated up front
		ret = -ENOMEM;
		failure_string = "alloc small buf";
		goto err_alloc_small_buf_failed;
	}
	buffer = proc->buffer;//binder_buffer pointer aliasing proc->buffer, i.e. the kernel virtual address
	INIT_LIST_HEAD(&proc->buffers);
	list_add(&buffer->entry, &proc->buffers);//add the buffer to this process's buffers list
	buffer->free = 1;
	binder_insert_free_buffer(proc, buffer);//insert the free buffer into proc->free_buffers
	proc->free_async_space = proc->buffer_size / 2;
	barrier();
	proc->files = get_files_struct(proc->tsk);
	proc->vma = vma;//remember the user-space mapping in the binder_proc
	proc->vma_vm_mm = vma->vm_mm;

	/*printk(KERN_INFO "binder_mmap: %d %lx-%lx maps %p\n",
		 proc->pid, vma->vm_start, vma->vm_end, proc->buffer);*/
	return 0;
//error paths:
err_alloc_small_buf_failed:
	kfree(proc->pages);
	proc->pages = NULL;
err_alloc_pages_failed:
	mutex_lock(&binder_mmap_lock);
	vfree(proc->buffer);
	proc->buffer = NULL;
err_get_vm_area_failed:
err_already_mapped:
	mutex_unlock(&binder_mmap_lock);
err_bad_arg:
	printk(KERN_ERR "binder_mmap: %d %lx-%lx %s failed %d\n",
	       proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
	return ret;
}
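user_buffer_offset is the whole trick behind the "one copy" design: the same physical page is visible at two virtual addresses, so the driver can convert between the two views with plain arithmetic, as in the following sketch (the helper name is made up for illustration):

/* Sketch: converting the kernel view of the shared buffer to the user view.
 * kernel_addr must lie within [proc->buffer, proc->buffer + proc->buffer_size). */
static void __user *binder_kernel_to_user(struct binder_proc *proc, void *kernel_addr)
{
	return (void __user *)((uintptr_t)kernel_addr + proc->user_buffer_offset);
}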

2.1.8 binder_update_page_range

Its main jobs are:

1. Allocate physical pages.

2. Map the physical pages into the kernel virtual range.

3. Map the same physical pages into the user-space range.

Note that binder_update_page_range can both allocate and free physical pages, depending on the allocate flag.

//binder_update_page_range does three things: allocate physical pages, map them into kernel space,
//and map them into the user process's space. It can also free pages instead (allocate == 0).
static int binder_update_page_range(struct binder_proc *proc, int allocate,
				    void *start, void *end,
				    struct vm_area_struct *vma)
					//parameters:
					//proc: the binder_proc of the process concerned
					//allocate: allocate (1) or free (0)
					//start: start of the kernel virtual range
					//end: end of the kernel virtual range
					//vma: the user-space mapping
{
	void *page_addr;
	unsigned long user_page_addr;
	struct vm_struct tmp_area;
	struct page **page;
	struct mm_struct *mm;


	if (end <= start)
		return 0;

	trace_binder_update_page_range(proc, allocate, start, end);

	if (vma)
		mm = NULL;
	else
		mm = get_task_mm(proc->tsk);

	if (mm) {
		down_write(&mm->mmap_sem);
		vma = proc->vma;
		if (vma && mm != proc->vma_vm_mm) {
			pr_err("binder: %d: vma mm and task mm mismatch\n",
				proc->pid);
			vma = NULL;
		}
	}

	if (allocate == 0)
		goto free_range;

	if (vma == NULL) {
		printk(KERN_ERR "binder: %d: binder_alloc_buf failed to "
		       "map pages in userspace, no vma\n", proc->pid);
		goto err_no_vma;
	}
	//everything above validates the arguments
	//
	for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
		int ret;
		struct page **page_array_ptr;
		page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];

		BUG_ON(*page);
		//allocate a physical page
		*page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
		if (*page == NULL) {
			printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
			       "for page at %p\n", proc->pid, page_addr);
			goto err_alloc_page_failed;
		}
		tmp_area.addr = page_addr;//tmp_area holds the kernel address that will map this physical page
		tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
		page_array_ptr = page;
		ret = map_vm_area(&tmp_area, PAGE_KERNEL, &page_array_ptr);//map the physical page at the kernel address tmp_area
		if (ret) {
			printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
			       "to map page at %p in kernel\n",
			       proc->pid, page_addr);
			goto err_map_kernel_failed;
		}
		user_page_addr =
			(uintptr_t)page_addr + proc->user_buffer_offset;
		ret = vm_insert_page(vma, user_page_addr, page[0]);//map the same physical page into the user-space vma
		if (ret) {
			printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
			       "to map page at %lx in userspace\n",
			       proc->pid, user_page_addr);
			goto err_vm_insert_page_failed;
		}
		/* vm_insert_page does not seem to increment the refcount */
	}
	if (mm) {
		up_write(&mm->mmap_sem);
		mmput(mm);
	}
	return 0;

free_range:
	for (page_addr = end - PAGE_SIZE; page_addr >= start;
	     page_addr -= PAGE_SIZE) {
		page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
		if (vma)
			zap_page_range(vma, (uintptr_t)page_addr +
				proc->user_buffer_offset, PAGE_SIZE, NULL);
err_vm_insert_page_failed:
		unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
err_map_kernel_failed:
		__free_page(*page);
		*page = NULL;
err_alloc_page_failed:
		;
	}
err_no_vma:
	if (mm) {
		up_write(&mm->mmap_sem);
		mmput(mm);
	}
	return -ENOMEM;
}

2.2 Obtaining the BpServiceManager object

2.2.1 defaultServiceManager

1. The key line is gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(nullptr));

So we need to look at what getContextObject and interface_cast do.

Spoiler: the sm finally returned is a BpServiceManager object. Let's see how that falls out.

sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != nullptr) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
		
        while (gDefaultServiceManager == nullptr) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(nullptr));
            if (gDefaultServiceManager == nullptr)
                sleep(1);//ServiceManager is not ready yet: wait one second, then try again
        }
    }

    return gDefaultServiceManager;//gDefaultServiceManager is a BpServiceManager
}

2.2.2 ProcessState::getContextObject

1. This ultimately returns the BpBinder wrapping servicemanager's handle.

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);//0 is servicemanager's handle
}

2.2.3 ProcessState::getStrongProxyForHandle

1. Creates a BpBinder with handle 0, i.e. SM's BpBinder.

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)//called with 0, serviceManager's handle
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);//look up the resource entry for SM's handle

    if (e != nullptr) {
        IBinder* b = e->binder;//b is null at this point
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
				//Perform a dummy transaction to make sure the context manager is registered:
				//a ping tests whether binder is ready
                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, nullptr, 0);
                if (status == DEAD_OBJECT)
                   return nullptr;
            }

            b = BpBinder::create(handle);//create a BpBinder with handle 0, i.e. SM's BpBinder
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;//return a BpBinder with handle 0
}

2.2.4 ProcessState::lookupHandleLocked

ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)//handle is 0 here, the handle of SM's proxy
{
    const size_t N=mHandleToObject.size();//N is 0 on the first call
    if (N <= (size_t)handle) {
        handle_entry e;
        e.binder = nullptr;
        e.refs = nullptr;
        status_t err = mHandleToObject.insertAt(e, N, handle+1-N);//mHandleToObject is the container of handle_entry items
        if (err < NO_ERROR) return nullptr;
    }
    return &mHandleToObject.editItemAt(handle);//return an empty handle_entry
}

2.2.5 BpBinder::create

1. Creates the BpBinder object for SM.

BpBinder* BpBinder::create(int32_t handle) {
    int32_t trackedUid = -1;
	//sCountByUidEnabled defaults to false
	/**
    if (sCountByUidEnabled) {
        trackedUid = IPCThreadState::self()->getCallingUid();
        AutoMutex _l(sTrackingLock);
        uint32_t trackedValue = sTrackingMap[trackedUid];
        if (CC_UNLIKELY(trackedValue & LIMIT_REACHED_MASK)) {
            if (sBinderProxyThrottleCreate) {
                return nullptr;
            }
        } else {
            if ((trackedValue & COUNTING_VALUE_MASK) >= sBinderProxyCountHighWatermark) {
                ALOGE("Too many binder proxy objects sent to uid %d from uid %d (%d proxies held)",
                      getuid(), trackedUid, trackedValue);
                sTrackingMap[trackedUid] |= LIMIT_REACHED_MASK;
                if (sLimitCallback) sLimitCallback(trackedUid);
                if (sBinderProxyThrottleCreate) {
                    ALOGI("Throttling binder proxy creates from uid %d in uid %d until binder proxy"
                          " count drops below %d",
                          trackedUid, getuid(), sBinderProxyCountLowWatermark);
                    return nullptr;
                }
            }
        }
        sTrackingMap[trackedUid]++;
    }
	*/
    return new BpBinder(handle, trackedUid);//create the BpBinder object
}

2.2.6 BpBinder::BpBinder

BpBinder::BpBinder(int32_t handle, int32_t trackedUid)//handle is 0, trackedUid is -1
    : mHandle(handle)//0, the handle of SM's BpBinder
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(nullptr)
    , mTrackedUid(trackedUid)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);//extend the lifetime to weak-reference control
    IPCThreadState::self()->incWeakHandle(handle, this);//bump the BpBinder's weak reference count
}

2.2.7 IPCThreadState::self

IPCThreadState* IPCThreadState::self()
{
	/**not executed on the first call
    if (gHaveTLS.load(std::memory_order_acquire)) {//initialized to false: static std::atomic<bool> gHaveTLS(false);
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }


	
    // Racey, heuristic test for simultaneous shutdown.
    if (gShutdown.load(std::memory_order_relaxed)) {//initialized to false: static std::atomic<bool> gShutdown = false;
        ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
        return nullptr;
    }
	*/

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS.load(std::memory_order_relaxed)) {
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);//allocate the key for this process's
		//thread-specific data; the key is global to all threads in the process.
		//The first argument points at the key; the second names a destructor which,
		//if non-null, is called when each thread exits to free the memory bound to the key.
		//pthread_key_create returns 0 on success
		/**
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
                    strerror(key_create_value));
            return nullptr;
        }
		*/
        gHaveTLS.store(true, std::memory_order_release);//set gHaveTLS to true
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;//jump back to the fast path shown commented out above: the TLS lookup now runs and a new IPCThreadState is created
}

2.2.8 IPCThreadState::IPCThreadState

IPCThreadState::IPCThreadState()
      : mProcess(ProcessState::self()),//remember the current ProcessState object
        mServingStackPointer(nullptr),
        mWorkSource(kUnsetWorkSource),
        mPropagateWorkSource(false),
        mIsLooper(false),
        mIsFlushing(false),
        mStrictModePolicy(0),
        mLastTransactionBinderFlags(0),
        mCallRestriction(mProcess->mCallRestriction) {
    pthread_setspecific(gTLS, this);//store this IPCThreadState in the TLS slot gTLS
    clearCaller();
    mIn.setDataCapacity(256);//size the input buffer
    mOut.setDataCapacity(256);//size the output buffer
}

2.2.9 IPCThreadState::incWeakHandle

void IPCThreadState::incWeakHandle(int32_t handle, BpBinder *proxy)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);//ask the driver to bump the binder's reference count
    mOut.writeInt32(handle);//0 here, SM's handle
	if (!flushIfNeeded()) {//flushIfNeeded returns false: this thread has not entered the loop yet, so it cannot talk to the binder driver
    // Create a temp reference until the driver has handled this command.
    proxy->getWeakRefs()->incWeak(mProcess.get());
    mPostWriteWeakDerefs.push(proxy->getWeakRefs());
	}
}

So gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(nullptr));

has become
gDefaultServiceManager = interface_cast<IServiceManager>(BpBinder(0,trackedUid));

Next we look at what interface_cast does.

2.2.10 interface_cast

This is a function template.

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

With INTERFACE instantiated as IServiceManager this becomes:

inline sp<IServiceManager> interface_cast(const sp<IBinder>& obj)
{
    return IServiceManager::asInterface(obj);//obj is BpBinder(0,trackedUid);
	//what is really returned is a BpServiceManager(BpBinder(0,trackedUid)) object
}

Next, look at what asInterface does.

2.2.11 asInterface

#define DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE(INTERFACE, NAME)\
    const ::android::String16 I##INTERFACE::descriptor(NAME);           \
    const ::android::String16&                                          \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    ::android::sp<I##INTERFACE> I##INTERFACE::asInterface(              \
            const ::android::sp<::android::IBinder>& obj)               \
    {                                                                   \
        ::android::sp<I##INTERFACE> intr;                               \
        if (obj != nullptr) {                                           \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == nullptr) {                                      \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }

After substitution this becomes:

Instantiated as DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager"), the macro expands to:
    const ::android::String16 IServiceManager::descriptor("android.os.IServiceManager");    
	
    ::android::sp<IServiceManager> IServiceManager::asInterface(              
            const ::android::sp<::android::IBinder>& obj)               
    {                                                                   
        ::android::sp<IServiceManager> intr;                               
        if (obj != nullptr) {//obj is BpBinder(0,trackedUid)                                    
            intr = static_cast<IServiceManager*>(                          
                obj->queryLocalInterface(IServiceManager::descriptor).get());//look up a local implementation of "android.os.IServiceManager"      
            if (intr == nullptr) {                                      
                intr = new BpServiceManager(obj);                          
            }                                                           
        }                                                               
        return intr; //what is returned is BpServiceManager(BpBinder(0,trackedUid))     		
    }
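For completeness, the declaration side of this pattern is the DECLARE_META_INTERFACE macro in IInterface.h, which declares the descriptor and asInterface that the IMPLEMENT macro defines. Roughly (simplified, quoted from memory):

#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const ::android::String16 descriptor;                        \
    static ::android::sp<I##INTERFACE> asInterface(                     \
            const ::android::sp<::android::IBinder>& obj);              \
    virtual const ::android::String16& getInterfaceDescriptor() const;  \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();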

2.2.12 queryLocalInterface

//obj is a BpBinder object, so look at its queryLocalInterface
sp<IInterface>  IBinder::queryLocalInterface(const String16& /*descriptor*/)
{
    return nullptr;//returns null
}

So at this point

gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(nullptr));

has become
gDefaultServiceManager = interface_cast<IServiceManager>(BpBinder(0,trackedUid));

which is equivalent to
gDefaultServiceManager = new BpServiceManager(new BpBinder(0,trackedUid));

2.2.13 BpServiceManager::BpServiceManager

explicit BpServiceManager(const sp<IBinder>& impl)//impl is BpBinder(0,trackedUid)
        : BpInterface<IServiceManager>(impl)
{
}

2.2.14 BpInterface::BpInterface

class BpInterface : public INTERFACE, public BpRefBase
inline BpInterface<IServiceManager>::BpInterface(const sp<IBinder>& remote)//remote is BpBinder(0,trackedUid)
    : BpRefBase(remote)
{
}



After substituting the template parameter this is equivalent to:
class BpInterface : public IServiceManager, public BpRefBase
inline BpInterface<IServiceManager>::BpInterface(const sp<IBinder>& remote)//remote is BpBinder(0,trackedUid)
    : BpRefBase(remote)
{
}

2.2.15 BpRefBase::BpRefBase

BpRefBase::BpRefBase(const sp<IBinder>& o)//o is BpBinder(0,trackedUid)
    : mRemote(o.get()), mRefs(nullptr), mState(0)//mRemote is BpBinder(0,trackedUid)
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);//extend the object lifetime to weak-reference control

    if (mRemote) {
        mRemote->incStrong(this);           //bump the target's reference count
        mRefs = mRemote->createWeak(this);  //hold a weak reference on mRemote
    }
}
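mRemote is what the proxy later hands its transactions to: BpRefBase exposes it through remote(), which is exactly what addService calls in section 2.3.2 below. The accessor is a one-liner (as in Binder.h, quoted from memory):

// inside BpRefBase (sketch):
inline IBinder* remote() { return mRemote; }   // here: BpBinder(0, trackedUid)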

2.3 Registering the multimedia service with servicemanager

2.3.1 MediaPlayerService::instantiate

Its main job:

1. MediaPlayerService calls addService on the BpServiceManager object.

void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(String16("media.player"), new MediaPlayerService());//BpServiceManager's addService method
}

2.3.2 IServiceManager::addService

1. Build the Parcel data to send to the driver, plus the reply Parcel that will receive the response.

2. Write the MediaPlayerService binder entity and its string name into data.

3. Send the message through BpBinder's transact interface and wait for the reply.

[/frameworks/native/libs/binder/IServiceManager.cpp]
virtual status_t addService(const String16& name, const sp<IBinder>& service,
                                bool allowIsolated, int dumpsysPriority) {
		//parameters: name is "media.player"; service is new MediaPlayerService, a BBinder entity;
		//allowIsolated is false; dumpsysPriority is 8, the default priority
        Parcel data, reply;//data goes to the driver; reply receives the response
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());//write the RPC header "android.os.IServiceManager"
        data.writeString16(name);//write "media.player"
        data.writeStrongBinder(service);//flatten the binder entity (new MediaPlayerService) and write it into the parcel
        data.writeInt32(allowIsolated ? 1 : 0);//write 0
        data.writeInt32(dumpsysPriority);//write 8
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);//remote() is { return mRemote; },
		//so this calls transact on SM's BpBinder(0,-1). ADD_SERVICE_TRANSACTION is 3
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }
status_t Parcel::writeInterfaceToken(const String16& interface)
{
    const IPCThreadState* threadState = IPCThreadState::self();//get the IPCThreadState object
    writeInt32(threadState->getStrictModePolicy() | STRICT_MODE_PENALTY_GATHER);
    updateWorkSourceRequestHeaderPosition();
    writeInt32(threadState->shouldPropagateWorkSource() ?
            threadState->getCallingWorkSourceUid() : IPCThreadState::kUnsetWorkSource);
    // currently the interface identification token is just its name as a string
    return writeString16(interface);//interface is "android.os.IServiceManager"
}
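Put together, the data Parcel handed to transact looks roughly like this. This is a sketch of the layout implied by the writes above; the exact header fields depend on the strict-mode and work-source state:

// data Parcel, front to back (sketch):
//   int32  strict mode policy | STRICT_MODE_PENALTY_GATHER
//   int32  work source uid (or kUnsetWorkSource)
//   str16  "android.os.IServiceManager"        <- RPC header
//   str16  "media.player"                      <- service name
//   flat_binder_object{type=BINDER_TYPE_BINDER, cookie=BBinder*}  <- the service entity
//   int32  0                                   <- allowIsolated
//   int32  8                                   <- dumpsysPriority
// mObjects[] holds one entry: the offset of the flat_binder_object within the buffer.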

2.3.3 writeStrongBinder

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);//ProcessState is the process state; val is new MediaPlayerService, a BBinder entity;
	//this is the Parcel object data
}

2.3.4 flatten_binder

1. A Parcel keeps an internal buffer holding all of its flattened data. Some flat entries are plain data, while others describe binder objects; the Parcel therefore also maintains an mObjects array that records where the binder entries sit inside the buffer.

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    if (IPCThreadState::self()->backgroundSchedulingDisabled()) {
        /* minimum priority for all nodes is nice 0 */
        obj.flags = FLAT_BINDER_FLAG_ACCEPTS_FDS;
    } else {
        /* minimum priority for all nodes is MAX_NICE(19) */
        obj.flags = 0x13 | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    }

    if (binder != nullptr) {
        BBinder *local = binder->localBinder();//returns the BBinder for an entity, null for a BpBinder proxy
		/**
        if (!local) {//proxy case: a BpBinder
            BpBinder *proxy = binder->remoteBinder();//returns the BpBinder
            if (proxy == nullptr) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;//record this binder proxy's handle
            obj.hdr.type = BINDER_TYPE_HANDLE;//set the type to binder handle
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process 
            obj.handle = handle;//store the proxy's handle in the flat_binder_object
            obj.cookie = 0;
        }
		*/
		else {//entity case: a BBinder
            if (local->isRequestingSid()) {
                obj.flags |= FLAT_BINDER_FLAG_TXN_SECURITY_CTX;//security context
            }
            obj.hdr.type = BINDER_TYPE_BINDER;//set the type to binder entity
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());//store the BBinder's weak-ref object
            obj.cookie = reinterpret_cast<uintptr_t>(local);//cookie records the pointer to the binder entity
        }
    }
	/**
	else {
        obj.hdr.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }
	*/

    return finish_flatten_binder(binder, obj, out);//binder is the new MediaPlayerService BBinder entity,
	//obj is the flattened flat_binder_object, out is the Parcel.
	//finish_flatten_binder() records where this newly flattened flat_binder_object sits in the parcel.
	//In more detail: the parcel's internal buffer holds all flattened data, plain and binder alike,
	//so the parcel builds the separate mObjects array recording the positions of the binder entries.
}
BBinder* BBinder::localBinder()//a BBinder returns itself
{
    return this;
}

BBinder* IBinder::localBinder()//a BpBinder falls through to this and returns null
{
    return nullptr;
}

BpBinder* BpBinder::remoteBinder()//returns the BpBinder
{
    return this;
}
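The flattened form written into the parcel is the UAPI struct flat_binder_object. Roughly, from the binder UAPI header and quoted from memory (older kernels inline a plain type field instead of the hdr wrapper):

struct flat_binder_object {
	struct binder_object_header hdr;  /* hdr.type: BINDER_TYPE_BINDER, BINDER_TYPE_HANDLE, ... */
	__u32 flags;                      /* e.g. FLAT_BINDER_FLAG_ACCEPTS_FDS */
	union {
		binder_uintptr_t binder;  /* entity: weak-ref pointer of the local BBinder */
		__u32 handle;             /* proxy: the handle */
	};
	binder_uintptr_t cookie;          /* entity: pointer to the local BBinder */
};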

2.3.5 BpBinder::transact

//transact(ADD_SERVICE_TRANSACTION, data, &reply);
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
	//parameters:
	//code is 3, i.e. ADD_SERVICE_TRANSACTION
	//data holds the name of the media service being registered and its BBinder entity
	//reply is empty
	//flags is 0
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {//set to 1 in the BpBinder constructor (line 291): whether the target is alive
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
			//mHandle is 0, the handle of SM's BpBinder
			//code is 3, ADD_SERVICE_TRANSACTION
			//data holds the media service's name and BBinder entity
			//reply is empty
			//flags is 0
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

2.3.6 IPCThreadState::transact

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
								//handle is 0, the handle of SM's BpBinder
								//code is 3, ADD_SERVICE_TRANSACTION
								//data holds the media service's name and BBinder entity
								//reply is empty
								//flags is 0
{
    status_t err;

    flags |= TF_ACCEPT_FDS;//TF_ACCEPT_FDS: the reply may carry file descriptors


    LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
        (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");//oneway or blocking reply; this call blocks
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, nullptr);//pack the message into a struct and write it into mOut
	/**
    if (err != NO_ERROR) {//error case
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
	*/

    if ((flags & TF_ONE_WAY) == 0) {//synchronous case
        if (UNLIKELY(mCallRestriction != ProcessState::CallRestriction::NONE)) {
            if (mCallRestriction == ProcessState::CallRestriction::ERROR_IF_NOT_ONEWAY) {
                ALOGE("Process making non-oneway call but is restricted.");
                CallStack::logStack("non-oneway call", CallStack::getCurrent(10).get(),
                    ANDROID_LOG_ERROR);
            } else /* FATAL_IF_NOT_ONEWAY */ {
                LOG_ALWAYS_FATAL("Process may not make oneway calls.");
            }
        }

        if (reply) {//a reply parcel was passed in: waitForResponse on it
            err = waitForResponse(reply);
        } else {//no reply parcel: construct one
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }

    } else {
        err = waitForResponse(nullptr, nullptr);
    }

    return err;
}

2.3.7 IPCThreadState::writeTransactionData

1. Wrap the data once more, filling a binder_transaction_data structure.

2. Write the bytes destined for the driver into mOut.

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
	//parameters:
	//cmd is BC_TRANSACTION, a transport request
	//binderFlags is 0x10: the reply may carry file descriptors
	//handle is 0, the handle of SM's BpBinder
	//code is 3, ADD_SERVICE_TRANSACTION
	//data holds the media service's name and BBinder entity
	//statusBuffer is null
{
    binder_transaction_data tr;//see the driver structure walkthrough for this struct

    tr.target.ptr = 0; //ptr is unused: we are sending to SM's handle
    tr.target.handle = handle;//SM's handle, 0
    tr.code = code;//3, ADD_SERVICE_TRANSACTION
    tr.flags = binderFlags;//0x10: the reply may carry file descriptors
    tr.cookie = 0;//only meaningful for entities
    tr.sender_pid = 0;//sender pid
    tr.sender_euid = 0;//sender uid

    const status_t err = data.errorCheck();//check the parcel for errors
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();//size of the data
        tr.data.ptr.buffer = data.ipcData();//pointer to the start of the data
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);//ipcObjectsCount returns mObjectsSize; mObjects
		//stores the positions of the flattened binder objects, so this is the byte size of the offsets array
        tr.data.ptr.offsets = data.ipcObjects();//pointer to the start of the mObjects array, which records where
		//every binder entity or reference sits inside the buffer
    }
	/**
	else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }
	*/

    mOut.writeInt32(cmd);//write BC_TRANSACTION into mOut; note (line 309) that mOut still holds the earlier BC_INCREFS command
    mOut.write(&tr, sizeof(tr));//write the tr struct

    return NO_ERROR;
}
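tr here is the UAPI struct binder_transaction_data; its fields line up one-for-one with the assignments above (quoted from the binder UAPI header, from memory):

struct binder_transaction_data {
	union {
		__u32            handle;   /* target handle for commands (0 = servicemanager) */
		binder_uintptr_t ptr;      /* target binder_node cookie for returns */
	} target;
	binder_uintptr_t cookie;
	__u32  code;                       /* e.g. ADD_SERVICE_TRANSACTION (3) */
	__u32  flags;                      /* e.g. TF_ACCEPT_FDS */
	pid_t  sender_pid;
	uid_t  sender_euid;
	binder_size_t data_size;           /* bytes of parcel data */
	binder_size_t offsets_size;        /* bytes of the object-offset array */
	union {
		struct {
			binder_uintptr_t buffer;   /* the parcel data */
			binder_uintptr_t offsets;  /* offsets of the flat_binder_objects */
		} ptr;
		__u8 buf[8];
	} data;
};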

2.3.8 IPCThreadState::waitForResponse (send the add-service request to the driver)

//On the first loop iteration mOut holds data, so this sends the add-service request to the driver
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)//acquireResult defaults to nullptr
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;//first iteration: mOut has data, send the add request to the driver
		
		//the rest of the loop handles the reply messages
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(nullptr,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(nullptr,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

2.3.9 IPCThreadState::talkWithDriver

status_t IPCThreadState::talkWithDriver(bool doReceive)//doReceive defaults to true
{
	/**
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }
	*/

    binder_write_read bwr;
	//struct binder_write_read {
	//binder_size_t		write_size;//bytes to write, total size of write_buffer
	//binder_size_t		write_consumed;//bytes of write_buffer consumed by the driver
	//binder_uintptr_t	write_buffer;//pointer to the write data
	//binder_size_t		read_size;//bytes to read, total size of read_buffer
	//binder_size_t		read_consumed;//bytes of read_buffer filled by the driver
	//binder_uintptr_t	read_buffer;//pointer to the read buffer
	//};

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();//needRead is true here,
	//because mIn.dataPosition() equals mIn.dataSize()


	//We do not write anything while we are still reading from mIn, or while the caller
	//intends to read next (doReceive true) and there is still unread input.
	//Here outAvail equals mOut.dataSize()
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;//write_size is mOut.dataSize()
    bwr.write_buffer = (uintptr_t)mOut.data();

	
    if (doReceive && needRead) {//we also want to read from the driver
        bwr.read_size = mIn.dataCapacity();//256 bytes
        bwr.read_buffer = (uintptr_t)mIn.data();
    }
	/*
	else {//not reading: zero the read size and buffer
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }*/
	

    // If both buffers are empty there is nothing to do and we return immediately; here write_size is non-zero.
	/**
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
	*/

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)//write to the binder driver; mOut holds the add-service data
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
		/**
        if (mProcess->mDriverFD < 0) {
            err = -EBADF;
        }
		*/
    } while (err == -EINTR);


    if (err >= NO_ERROR) {//the driver received the message
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())//the driver consumed less than mOut holds: it failed to consume the write buffer
                LOG_ALWAYS_FATAL("Driver did not consume write buffer. "
                                 "err: %s consumed: %zu of %zu",
                                 statusToString(err).c_str(),
                                 (size_t)bwr.write_consumed,
                                 mOut.dataSize());
            else {//mOut was fully consumed
                mOut.setDataSize(0);//reset its size to 0
                processPostWriteDerefs();//drop the temporary write references taken earlier
            }
        }
		/**
        if (bwr.read_consumed > 0) {//the driver returned data
            mIn.setDataSize(bwr.read_consumed);//set mIn's size
            mIn.setDataPosition(0);//rewind mIn to the start
        }
		*/
        return NO_ERROR;
    }

    return err;
}
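At the moment of the ioctl above, the write buffer handed to the driver contains both commands queued so far, as noted back in writeTransactionData. Schematically:

// bwr.write_buffer contents at this point (sketch):
//   BC_INCREFS, 0                                  <- queued by incWeakHandle(0) in 2.2.9
//   BC_TRANSACTION, binder_transaction_data{...}   <- the addService call
// bwr.read_buffer: 256 bytes of empty mIn space for the driver's answer.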

2.3.10 binder_ioctl

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;//recover the media service process's proc object
	struct binder_thread *thread;//the media service process's binder thread
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;//__user marks a user-space pointer

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//condition holds: returns 0 immediately, no sleep
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc);//get the binder_thread
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {//the command is BINDER_WRITE_READ
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {//size check
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//copy the data from user space into kernel space
			ret = -EFAULT;
			goto err;
		}

		if (bwr.write_size > 0) {//the write size is non-zero here
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			trace_binder_write_done(ret);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {//the read size is non-zero: we also want to read messages from the driver
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			//expanded at length further below; parameters:
			//proc: the media service's proc
			//bwr.read_buffer: address of the read buffer
			//read_size > 0
			//read_consumed: bytes consumed so far, 0
			//the last parameter is 0
			trace_binder_read_done(ret);
			/**if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}*/
		}
		binder_debug(BINDER_DEBUG_READ_WRITE,
			     "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			     proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
			     bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}

2.3.11 binder_thread_write (increment the refcount on SM's handle 0)

1. The write buffer holds two messages at this point: one asking to bump the refcount on SM's handle 0, and one carrying the media service's request to add itself to servicemanager.

The first loop iteration below handles the refcount request for SM's handle 0.

//The buffer holds two commands; the first iteration pulls out the refcount request.
//BC_INCREFS bumps the binder_ref weak reference count by 1.
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
			void __user *buffer, int size, signed long *consumed)
			//parameters:
			//proc: the media service process's proc object
			//thread: the media service process's binder thread
			//buffer: start of the data in the kernel; it holds BC_INCREFS with its handle argument,
			//followed by the add_service request carrying the BBinder and related data
			//size: data size in bytes
			//consumed = 0: bytes already consumed by the driver
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;//the start address, since consumed is 0
	void __user *end = buffer + size;//the end address

	while (ptr < end && thread->return_error == BR_OK) {//loop; the first iteration reads cmd BC_INCREFS
		if (get_user(cmd, (uint32_t __user *)ptr))//fetch the command from the start of the buffer into cmd
			return -EFAULT;
		ptr += sizeof(uint32_t);//advance past the command
		trace_binder_command(cmd);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {//bookkeeping
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		case BC_INCREFS:	//binder_ref weak count +1
		case BC_ACQUIRE:	//binder_ref strong count +1
		case BC_RELEASE:	//binder_ref strong count -1
		case BC_DECREFS: {	//binder_ref weak count -1
			uint32_t target;
			struct binder_ref *ref;
			const char *debug_string;

			if (get_user(target, (uint32_t __user *)ptr))//ptr now points at the value 0, so target is 0
				return -EFAULT;
			ptr += sizeof(uint32_t);
			if (target == 0 && binder_context_mgr_node &&
			    (cmd == BC_INCREFS || cmd == BC_ACQUIRE)) {//taken here: the target is SM's proxy and SM's entity exists
				ref = binder_get_ref_for_node(proc,
					       binder_context_mgr_node);//found (or created) the reference to SM
				if (ref->desc != target) {
					binder_user_error("binder: %d:"
						"%d tried to acquire "
						"reference to desc 0, "
						"got %d instead\n",
						proc->pid, thread->pid,
						ref->desc);
				}
			} else
				ref = binder_get_ref(proc, target);
			if (ref == NULL) {
				binder_user_error("binder: %d:%d refcou"
					"nt change on invalid ref %d\n",
					proc->pid, thread->pid, target);
				break;
			}
			switch (cmd) {
			case BC_INCREFS:
				debug_string = "IncRefs";
				binder_inc_ref(ref, 0, NULL);//weak count +1
				break;
			}
			binder_debug(BINDER_DEBUG_USER_REFS,
				     "binder: %d:%d %s ref %d desc %d s %d w %d for node %d\n",
				     proc->pid, thread->pid, debug_string, ref->debug_id,
				     ref->desc, ref->strong, ref->weak, ref->node->debug_id);
			break;
		}
		}
		*consumed = ptr - buffer;//bytes consumed by the driver
	}//note the while loop is not done yet: the second iteration follows
	return 0;
}

2.3.12 binder_thread_write (handle the media service's add request)

The second loop iteration handles the media service's add request.

//second iteration: pull out the second message, the media service's add request
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
			void __user *buffer, int size, signed long *consumed)
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;
	//second iteration
	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))//cmd is BC_TRANSACTION now
			return -EFAULT;
		ptr += sizeof(uint32_t);//advance past the command
		trace_binder_command(cmd);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		case BC_TRANSACTION:
		case BC_REPLY: {
			struct binder_transaction_data tr;

			if (copy_from_user(&tr, ptr, sizeof(tr)))//copy the payload from user space into the kernel
				return -EFAULT;
			ptr += sizeof(tr);//advance; we should be at the end now
			binder_transaction(proc, thread, &tr, cmd == BC_REPLY);//parameters:
			//proc: the mediaservice proc
			//thread: the mediaservice binder thread
			//tr: the message
			//cmd is BC_TRANSACTION, so the last argument is 0
			break;
		}
		}
		*consumed = ptr - buffer;
	}
	return 0;
}

2.3.13 binder_transaction

Its main jobs are:

1. Find the target process's proc object and a target thread.

2. Pull the flattened binder entity of mediaplayerservice out of the Parcel, create the corresponding binder_node in the driver, and insert that node into the mediaplayerservice proc's nodes red-black tree. Then create a binder_ref node in the target process's (SM's) binder_proc, adding a reference to mediaservice into SM's refs red-black tree.

3. Allocate a kernel buffer from the region SM mapped earlier and copy the add-service transaction data into SM's buffer.

4. Queue the add-service transaction on SM's todo list, and wake the SM process.

5. Also queue a BINDER_WORK_TRANSACTION_COMPLETE work item on the mediaservice thread's todo list.

From here the flow forks in two, as sketched below.

Path 1: the client (mediaservice) consumes the BINDER_WORK_TRANSACTION_COMPLETE item and then goes to sleep, waiting for the server's reply.
Path 2: the binder driver wakes the SM server; SM handles the add request from mediaservice and sends its reply to the driver; the driver then wakes the waiting mediaservice, which reads the reply.
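The essential hand-off at the end of binder_transaction compresses to a few lines, quoted from the tail of the same function in the pre-4.14-style driver sources this article uses (from memory):

	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);          /* queue on SM's todo list */
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);     /* queue on media's own todo list */
	if (target_wait)
		wake_up_interruptible(target_wait);          /* wake servicemanager */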

static void binder_transaction(struct binder_proc *proc,
			       struct binder_thread *thread,
			       struct binder_transaction_data *tr, int reply)
				    //reply是0
				    //tr的数据如下
					//tr.target.ptr = 0; //不用ptr,因为此时要发送的是SM服务的handle句柄
					//tr.target.handle = handle;//SM服务的handle句柄0
					//tr.code = code;//code是3,也是ADD_SERVICE_TRANSACTION
					//tr.flags = binderFlags;//允许使用文件描述符进行答复,值是0X10
					//tr.cookie = 0;//实体的cookie才有用
					//tr.sender_pid = 0;//发送方pid
					//tr.sender_euid = 0;//发送方uid
					//tr.data_size = data.ipcDataSize();//设置数据的大小
					//tr.data.ptr.buffer = data.ipcData();//指向数据的首地址
					//tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);//ipcObjectsCount函数返回的是mObjectsSize,而mObjects
					//存储的是多个扁平的binder对象的位置。是offsets_size的偏移大小
					//tr.data.ptr.offsets = data.ipcObjects();//指向mObjects数组的首地址。mObjects里面记录了所有binder实体或引用在buffer中的位置。
{
	struct binder_transaction *t;//描述Binder进程中通信过程,这个过程称为一个transaction(事务),保存了源线程和目标线程等消息。
	struct binder_work *tcomplete;//用来描述的处理的工作事项
	size_t *offp, *off_end;
	struct binder_proc *target_proc;//目标进程的proc,此处是ServiceManager的proc,因为是向SM去请求add服务
	struct binder_thread *target_thread = NULL;//sm的binder线程
	struct binder_node *target_node = NULL;//sm的binder实体
	struct list_head *target_list;
	wait_queue_head_t *target_wait;//目标等待队列,当Binder处理事务A依赖于其他Binder线程处理事务B的情况
     // 则会在sleep在wait所描述的等待队列中,直到B事物处理完毕再唤醒
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;//log事务
	uint32_t return_error;
	


	if (reply) {//此时reply为0
	..........
	}
	else {
		if (tr->target.handle) {//the handle is 0 here, so this branch is skipped
			/**
			struct binder_ref *ref;
			ref = binder_get_ref(proc, tr->target.handle);// the handle leads to the binder_ref, and the binder_ref to the binder_node
			if (ref == NULL) {
				binder_user_error("binder: %d:%d got "
					"transaction to invalid handle\n",
					proc->pid, thread->pid);
				return_error = BR_FAILED_REPLY;
				goto err_invalid_target_handle;
			}
			target_node = ref->node;//the binder_ref leads to the binder_node
			*/
		}
		else {
			target_node = binder_context_mgr_node;//找到目标binder实体节点,即SM的节点
			if (target_node == NULL) {
				return_error = BR_DEAD_REPLY;
				goto err_no_context_mgr_node;
			}
		}
		e->to_node = target_node->debug_id;//log事务。
		target_proc = target_node->proc;//找到目标进程的proc,即SM进程的proc
		/*
		if (target_proc == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}*/
		if (security_binder_transaction(proc->tsk, target_proc->tsk) < 0) {//检查Client进程是否有权限向Server进程发送请求
			return_error = BR_FAILED_REPLY;
			goto err_invalid_target_handle;
		}
		/*
		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {//如果flag不是oneway,并且线程的transaction_stack存在内容
			struct binder_transaction *tmp;
			tmp = thread->transaction_stack;
			if (tmp->to_thread != thread) {
				binder_user_error("binder: %d:%d got new "
					"transaction with bad transaction stack"
					", transaction %d has target %d:%d\n",
					proc->pid, thread->pid, tmp->debug_id,
					tmp->to_proc ? tmp->to_proc->pid : 0,
					tmp->to_thread ?
					tmp->to_thread->pid : 0);
				return_error = BR_FAILED_REPLY;
				goto err_bad_call_stack;
			}
			while (tmp) {
				if (tmp->from && tmp->from->proc == target_proc)
					target_thread = tmp->from;
				tmp = tmp->from_parent;
			}
		}*/
	}
	if (target_thread) {//target_thread is NULL on this first pass, so this is skipped
		/*
		e->to_thread = target_thread->pid;
		target_list = &target_thread->todo;
		target_wait = &target_thread->wait;
		*/
	}
	else {
		target_list = &target_proc->todo;//sm进程的todo队列
		target_wait = &target_proc->wait;//sm进程的wait队列
	}
	e->to_proc = target_proc->pid;//log事务

	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);//在内核创建binder_transaction的空间
	/*
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}*/
	binder_stats_created(BINDER_STAT_TRANSACTION);//记录信息

	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);//创建binder_work
	/*
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}*/
	binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

	t->debug_id = ++binder_last_id;//调试信息,此时是1
	e->debug_id = t->debug_id;

	if (reply)//reply为真,代表是BC_REPLY
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "binder: %d:%d BC_REPLY %d -> %d:%d, "
			     "data %p-%p size %zd-%zd\n",
			     proc->pid, thread->pid, t->debug_id,
			     target_proc->pid, target_thread->pid,
			     tr->data.ptr.buffer, tr->data.ptr.offsets,
			     tr->data_size, tr->offsets_size);
	else//reply为假,代表是BC_TRANSACTION
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "binder: %d:%d BC_TRANSACTION %d -> "
			     "%d - node %d, data %p-%p size %zd-%zd\n",
			     proc->pid, thread->pid, t->debug_id,
			     target_proc->pid, target_node->debug_id,
			     tr->data.ptr.buffer, tr->data.ptr.offsets,
			     tr->data_size, tr->offsets_size);

	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;//设置源线程为mediaservice的线程
	else
		t->from = NULL;
	t->sender_euid = proc->tsk->cred->euid;//设置源线程的用户id为mediaservice的进程的uid
	t->to_proc = target_proc;//目标进程的proc,此时是SM的proc对象,// 负责处理该事务的进程--ServiceManager
	t->to_thread = target_thread;//负责处理该事务的线程--ServiceManager的线程,但此时为空
	t->code = tr->code;//code是ADD_SERVICE_TRANSACTION,3
	t->flags = tr->flags;//允许使用文件描述符进行答复,值是0X10
	t->priority = task_nice(current);//priority是mediaservice服务的线程优先级

	trace_binder_transaction(reply, t, target_node);

	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));//从目标进程(SM进程)的target_proc中分配一块内核buffer空间,此buffer是映射了的地址空间。
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	trace_binder_transaction_alloc_buf(t->buffer);
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);//引用计数+1

	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));//计算meidaservice服务的binder对象的偏移量

	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {//拷贝用户空间的binder_transaction_data中ptr.buffer到目标进程的buffer-data的缓冲区。
		binder_user_error("binder: %d:%d got transaction with invalid "
			"data ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {//拷贝用户空间的ptr.offsets中ptr.buffer到目标进程(SM服务)的buffer缓冲区。
		binder_user_error("binder: %d:%d got transaction with invalid "
			"offsets ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (!IS_ALIGNED(tr->offsets_size, sizeof(size_t))) {
		binder_user_error("binder: %d:%d got transaction with "
			"invalid offsets size, %zd\n",
			proc->pid, thread->pid, tr->offsets_size);
		return_error = BR_FAILED_REPLY;
		goto err_bad_offset;
	}
	off_end = (void *)offp + tr->offsets_size;//计算偏移大小的末端
	for (; offp < off_end; offp++) {
		struct flat_binder_object *fp;
		if (*offp > t->buffer->data_size - sizeof(*fp) ||
		    t->buffer->data_size < sizeof(*fp) ||
		    !IS_ALIGNED(*offp, sizeof(void *))) {
			binder_user_error("binder: %d:%d got transaction with "
				"invalid offset, %zd\n",
				proc->pid, thread->pid, *offp);
			return_error = BR_FAILED_REPLY;
			goto err_bad_offset;
		}
		fp = (struct flat_binder_object *)(t->buffer->data + *offp);//取出flat_binder_object对象,里面存储的是meidaservice的BBbinder实体
		switch (fp->type) {//是BINDER_TYPE_BINDER,表示存储的是实体
		case BINDER_TYPE_BINDER:
		case BINDER_TYPE_WEAK_BINDER: {
			struct binder_ref *ref;//binder引用
			struct binder_node *node = binder_get_node(proc, fp->binder);//从mediaservice服务的proc对象中获取里面获取此meidaservice的binder实体
			//此时是没有的,因为之前并未创建meidaservice对应的驱动层binder实体对象。
			if (node == NULL) {//mediaservice服务的proc对象中无此BBbinder节点
				node = binder_new_node(proc, fp->binder, fp->cookie);//为mediaservice服务的binder实体创建binder_node,
				//并添加到meidaservice的proc红黑树proc->nodes.rb_node中
				/*
				if (node == NULL) {
					return_error = BR_FAILED_REPLY;
					goto err_binder_new_node_failed;
				}*/
				node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
				node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
			}
			if (fp->cookie != node->cookie) {//检查缓存
				binder_user_error("binder: %d:%d sending u%p "
					"node %d, cookie mismatch %p != %p\n",
					proc->pid, thread->pid,
					fp->binder, node->debug_id,
					fp->cookie, node->cookie);
				goto err_binder_get_ref_for_node_failed;
			}
			if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {//检查权限
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_for_node_failed;
			}
			ref = binder_get_ref_for_node(target_proc, node);//在目标进程(SM进程)的binder_proc中创建对应的binder_ref红黑树节点,
			//即将mediaservice的引用添加到SM进程的引用的红黑树中。
			if (ref == NULL) {
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_for_node_failed;
			}
			if (fp->type == BINDER_TYPE_BINDER)
				fp->type = BINDER_TYPE_HANDLE;//将类型变为binder引用类型
			else
				fp->type = BINDER_TYPE_WEAK_HANDLE;
			fp->handle = ref->desc;//记录meidaservice服务的句柄是1,binder_get_ref_for_node函数中会将desc进行+1,保证其唯一性。
			binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
				       &thread->todo);

			trace_binder_transaction_node_to_ref(t, node, ref);
		} break;
		}//end of the switch (the remaining object types are elided here)
	}//end of the for loop over the flattened objects
	if (reply) {//false here
		/*
		BUG_ON(t->buffer->async_transaction != 0);
		binder_pop_transaction(target_thread, in_reply_to);
		*/
	}
	else if (!(t->flags & TF_ONE_WAY)) {
		BUG_ON(t->buffer->async_transaction != 0);
		t->need_reply = 1;//是双向的,所以需要回复
		t->from_parent = thread->transaction_stack;//等于当前线程(mediaservice服务)的transaction_stack
		thread->transaction_stack = t;//将传递数据保存在请求线程的中,以便后续缓存释放等
	}
	/**
	else {
		BUG_ON(target_node == NULL);
		BUG_ON(t->buffer->async_transaction != 1);
		if (target_node->has_async_transaction) {
			target_list = &target_node->async_todo;
			target_wait = NULL;
		} else
			target_node->has_async_transaction = 1;
	}
	*/
	t->work.type = BINDER_WORK_TRANSACTION;//将BINDER_WORK_TRANSACTION添加到目标队列(SM进程的队列),即target_proc->todo队列
	list_add_tail(&t->work.entry, target_list);
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;//the not-yet-finished work item
	list_add_tail(&tcomplete->entry, &thread->todo);//append BINDER_WORK_TRANSACTION_COMPLETE to the current thread's queue (mediaservice's todo queue), i.e. thread->todo
	//this step matters: once this pending work item sits on the current thread's todo queue, mIn will have data to read.
	if (target_wait)
		wake_up_interruptible(target_wait);//唤醒等待队列,本次通信的目标进程(SM进程的队列)队列为target_proc->wait
	return;

/**
err_get_unused_fd_failed:
err_fget_failed:
err_fd_not_allowed:
err_binder_get_ref_for_node_failed:
err_binder_get_ref_failed:
err_binder_new_node_failed:
err_bad_object_type:
err_bad_offset:
err_copy_data_failed:
	trace_binder_transaction_failed_buffer_release(t->buffer);
	binder_transaction_buffer_release(target_proc, t->buffer, offp);
	t->buffer->transaction = NULL;
	binder_free_buf(target_proc, t->buffer);
err_binder_alloc_buf_failed:
	kfree(tcomplete);
	binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
err_alloc_tcomplete_failed:
	kfree(t);
	binder_stats_deleted(BINDER_STAT_TRANSACTION);
err_alloc_t_failed:
err_bad_call_stack:
err_empty_call_stack:
err_dead_binder:
err_invalid_target_handle:
err_no_context_mgr_node:
	binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
		     "binder: %d:%d transaction failed %d, size %zd-%zd\n",
		     proc->pid, thread->pid, return_error,
		     tr->data_size, tr->offsets_size);

	{
		struct binder_transaction_log_entry *fe;
		fe = binder_transaction_log_add(&binder_transaction_log_failed);
		*fe = *e;
	}

	BUG_ON(thread->return_error != BR_OK);
	if (in_reply_to) {
		thread->return_error = BR_TRANSACTION_COMPLETE;
		binder_send_failed_reply(in_reply_to, return_error);
	} else
		thread->return_error = return_error;
*/
}

2.4 The mediaplayerservice side goes to sleep

Let's follow the first path first.

On this path the client (mediaservice) consumes the BINDER_WORK_TRANSACTION_COMPLETE work item and then sleeps, waiting for the server's reply.

From the discussion above, we know that binder_transaction returns into binder_thread_write, which in turn returns to the binder_ioctl function of 2.3.10. We continue the analysis from there.

2.4.1 binder_ioctl

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;//fetch the proc object of the media service process
	struct binder_thread *thread;//the media service process's binder thread
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;//__user表示用户空间的指针

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//条件成立,立即返回0,不休眠
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc);//获取binder_thread
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {//如果命令是BINDER_WRITE_READ
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {//判断大小
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//从用户空间拷贝数据到内核空间
			ret = -EFAULT;
			goto err;
		}

		if (bwr.write_size > 0) {//此时写大于0
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			trace_binder_write_done(ret);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {//此时读的大小大于0,代表我们需要从驱动中读取消息
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			//此函数下面会有大量展开参数分析:
			//proc,meidaservice服务的proc
			//bwr.read_buffer,read_buffer的地址
			//read_size>0
			//read_consumed,代表已消费的字节数0
			//最后一个参数是0
			trace_binder_read_done(ret);
			/**if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}*/
		}
		binder_debug(BINDER_DEBUG_READ_WRITE,
			     "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			     proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
			     bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}

2.4.2 binder_thread_read

1. On the first pass, it puts a BR_NOOP into the read_buffer, then takes the BINDER_WORK_TRANSACTION_COMPLETE work item off the todo queue, converts it into BR_TRANSACTION_COMPLETE, and writes that into the read_buffer too.
2. On the second pass, it checks whether any work is left on the todo queue. There is none, so it returns, handing the read_buffer back to the binder_ioctl function of 2.4.1, which copies the data to user space. (A sketch of the no-more-data test follows.)
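A tiny sketch of the "has anything been added yet?" test in this read loop, under the assumption that only 4-byte command words were written: with just the initial BR_NOOP in the buffer the thread would retry and sleep again, but with 8 bytes written it returns to user space:

#include <stdio.h>

int main(void) {
    unsigned wrote = 8; /* BR_NOOP + BR_TRANSACTION_COMPLETE, 4 bytes each */
    if (wrote == 4)
        printf("only BR_NOOP present: goto retry and sleep again\n");
    else
        printf("return to user space with %u bytes consumed\n", wrote);
    return 0;
}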

First pass:

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      void  __user *buffer, int size,
			      signed long *consumed, int non_block)
			//参数分析:
			//proc,meidaservice服务的proc
			//buffer=bwr.read_buffer,read_buffer的地址
			//size=read_size>0
			//read_consumed,代表已消费的字节数0
			//non_block是0,代表阻塞
{
	void __user *ptr = buffer + *consumed;//指向首地址
	void __user *end = buffer + size;//指向尾端地址

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))//往用户指向的空间里面放一个BR_NOOP
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL &&
				list_empty(&thread->todo);//此时线程的todo队列中有一个未完成的事务
	/**
	if (thread->return_error != BR_OK && ptr < end) {
		if (thread->return_error2 != BR_OK) {
			if (put_user(thread->return_error2, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);
			binder_stat_br(proc, thread, thread->return_error2);
			if (ptr == end)
				goto done;
			thread->return_error2 = BR_OK;
		}
		if (put_user(thread->return_error, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		binder_stat_br(proc, thread, thread->return_error);
		thread->return_error = BR_OK;
		goto done;
	}
	*/


	thread->looper |= BINDER_LOOPER_STATE_WAITING;
	if (wait_for_proc_work)//not true this time, since the thread's todo queue has data
		proc->ready_threads++;

	binder_unlock(__func__);

	trace_binder_wait_for_work(wait_for_proc_work,
				   !!thread->transaction_stack,
				   !list_empty(&thread->todo));
	if (wait_for_proc_work) {//skipped: wait_for_proc_work is false
		/**
		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
					BINDER_LOOPER_STATE_ENTERED))) {
			binder_user_error("binder: %d:%d ERROR: Thread waiting "
				"for process work before calling BC_REGISTER_"
				"LOOPER or BC_ENTER_LOOPER (state %x)\n",
				proc->pid, thread->pid, thread->looper);
			wait_event_interruptible(binder_user_error_wait,
						 binder_stop_on_user_error < 2);
		}
		binder_set_nice(proc->default_priority);
		if (non_block) {
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
		*/
	}
	else {
		if (non_block) {//如果非阻塞
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));//此时thread中有消息是不会阻塞的
	}

	binder_lock(__func__);
	/**
	if (wait_for_proc_work)
		proc->ready_threads--;
	*/
	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;//取消当前线程正在等待的标志

	/**if (ret)
		return ret;*/

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;

		if (!list_empty(&thread->todo))
			w = list_first_entry(&thread->todo, struct binder_work, entry);//取出BINDER_WORK_TRANSACTION_COMPLETE事务
		/**
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added 
				goto retry;
			break;
		}*/
		/**
		if (end - ptr < sizeof(tr) + 4)
			break;
		*/

		switch (w->type) {
		/**
		case BINDER_WORK_TRANSACTION: {
			t = container_of(w, struct binder_transaction, work);
		} break;
		*/
		case BINDER_WORK_TRANSACTION_COMPLETE: {//此时走这里。
			cmd = BR_TRANSACTION_COMPLETE;//生成BR_TRANSACTION_COMPLETE
			if (put_user(cmd, (uint32_t __user *)ptr))//将此命令放入用户空间中,此时有两个命令BR_TRANSACTION_COMPLETE和BR_NOOP
				return -EFAULT;
			ptr += sizeof(uint32_t);

			binder_stat_br(proc, thread, cmd);
			binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
				     "binder: %d:%d BR_TRANSACTION_COMPLETE\n",
				     proc->pid, thread->pid);

			list_del(&w->entry);//删除w代表的BINDER_WORK_TRANSACTION_COMPLETE事务,因为此时已经用完了
			kfree(w);//释放w的空间
			binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
		} break;
		}

		if (!t)
			continue;//开启下一次循环
			.....
}

Second pass:

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      void  __user *buffer, int size,
			      signed long *consumed, int non_block)
{

	while (1) {//第二个循环
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;

		if (!list_empty(&thread->todo))//both todo queues are empty on this pass
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {//this branch runs
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
				goto retry;//not taken: ptr - buffer is 8 (BR_NOOP plus BR_TRANSACTION_COMPLETE)
			break;//this break exits the loop: the todo queues are empty and data has already been written
		}

		/* not reached on this pass; the break above has already left the loop
		if (end - ptr < sizeof(tr) + 4)
			break;
		*/
	}

done:

	*consumed = ptr - buffer;//8 bytes: the two commands BR_NOOP and BR_TRANSACTION_COMPLETE
	return 0;
}

2.4.3 IPCThreadState::talkWithDriver

The flow now returns to the talkWithDriver function of 2.3.9. The early part is the same as before; what matters here is the bwr.write_consumed > 0 handling, after which the flow returns to IPCThreadState::waitForResponse of 2.3.8.

The relevant excerpt:

    if (err >= NO_ERROR) {//代表读取成功
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())//如果驱动消费的数据大小小于mout的大小,则说明驱动没有消费mout数据
                LOG_ALWAYS_FATAL("Driver did not consume write buffer. "
                                 "err: %s consumed: %zu of %zu",
                                 statusToString(err).c_str(),
                                 (size_t)bwr.write_consumed,
                                 mOut.dataSize());
            else {//代表mout被正确消费
                mOut.setDataSize(0);//重置数据大小为0
                processPostWriteDerefs();//主要是将写的引用计数减少1,释放
            }
        }
        //此时bwr中有两个数据,一个BR_NOOP和BR_TRANSACTION_COMPLETE
        if (bwr.read_consumed > 0) {//如果驱动读的数据大小大于0
            mIn.setDataSize(bwr.read_consumed);//设置mIn的大小
            mIn.setDataPosition(0);//设置min数据起始位置
        }
        
        return NO_ERROR;
    }

 The full function:

status_t IPCThreadState::talkWithDriver(bool doReceive)//doReceive默认是true
{
    /**
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }
    */

    binder_write_read bwr;
    //struct binder_write_read {
    //binder_size_t        write_size;//要写入的字节数,write_buffer的总字节数
    //binder_size_t        write_consumed;//驱动程序占用的字节数,write_buffer已消费的字节数
    //binder_uintptr_t    write_buffer;//写缓冲数据的指针
    //binder_size_t        read_size;//要读的字节数,read_buffer的总字节数
    //binder_size_t        read_consumed;//驱动程序占用的字节数,read_buffer已消费的字节数
    //binder_uintptr_t    read_buffer;//读缓存数据的指针
    //};

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();//needRead is true here:
    //mIn has been fully consumed (dataPosition has caught up with dataSize), so the read buffer is empty


    //When we are still consuming mIn, or the caller intends to read the next piece of data
    //(doReceive is true) while mIn is not yet drained, nothing is written.
    //Here needRead is true, so outAvail becomes mOut.dataSize() and the add request will be written.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;//write_size是mOut.dataSize()
    bwr.write_buffer = (uintptr_t)mOut.data();

    
    if (doReceive && needRead) {//当我们正在读的时候。
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    }
/*    else {//当不读的时候,设置读的大小和buffer为0
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }*/
    

    // 如果读缓冲区和写缓冲区都为0,代表无事可做,立即返回,此时write_size中有数据。
    /**
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    */

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)//write to the binder driver; this is the add-service call, so mOut has data
        
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        /**
        if (mProcess->mDriverFD < 0) {
            err = -EBADF;
        }
        */
    } while (err == -EINTR);


    if (err >= NO_ERROR) {//代表读取成功
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())//如果驱动消费的数据大小小于mout的大小,则说明驱动没有消费mout数据
                LOG_ALWAYS_FATAL("Driver did not consume write buffer. "
                                 "err: %s consumed: %zu of %zu",
                                 statusToString(err).c_str(),
                                 (size_t)bwr.write_consumed,
                                 mOut.dataSize());
            else {//代表mout被正确消费
                mOut.setDataSize(0);//重置数据大小为0
                processPostWriteDerefs();//主要是将写的引用计数减少1,释放
            }
        }
        //此时bwr中有两个数据,一个BR_NOOP和BR_TRANSACTION_COMPLETE
        if (bwr.read_consumed > 0) {//如果驱动读的数据大小大于0
            mIn.setDataSize(bwr.read_consumed);//设置mIn的大小
            mIn.setDataPosition(0);//设置min数据起始位置
        }
        
        return NO_ERROR;
    }

    ///return err;
}
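A small sketch of the bwr bookkeeping above, with illustrative sizes. The field names match struct binder_write_read, but the demo struct itself is a stand-in: after the ioctl the driver reports how much of mOut it consumed and how many bytes it produced for mIn:

#include <stdint.h>
#include <stdio.h>

/* stand-in mirroring the fields of struct binder_write_read */
struct bwr_demo {
    uint64_t write_size, write_consumed, write_buffer;
    uint64_t read_size,  read_consumed,  read_buffer;
};

int main(void) {
    struct bwr_demo bwr = { .write_size = 64, .read_size = 256 };
    /* pretend the ioctl returned: everything written was taken, and two
     * 4-byte commands (BR_NOOP, BR_TRANSACTION_COMPLETE) came back */
    bwr.write_consumed = 64;
    bwr.read_consumed  = 8;
    printf("reset mOut: %s, mIn now holds %llu bytes\n",
           bwr.write_consumed == bwr.write_size ? "yes" : "no",
           (unsigned long long)bwr.read_consumed);
    return 0;
}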

2.4.4 IPCThreadState::waitForResponse

1. The first pass takes out BR_NOOP, which does nothing.

2. The second pass takes out BR_TRANSACTION_COMPLETE. If the message is asynchronous (oneway), the transaction ends right here. If it is synchronous, a third pass follows, in which mediaplayerservice sleeps until the driver wakes it with the reply. (The decision test is sketched below.)
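The decision in step 2 boils down to the !reply && !acquireResult test that appears in the second-pass listing further below. A stand-alone sketch of just that test, with void pointers standing in for the Parcel and status arguments:

#include <stdbool.h>
#include <stdio.h>

/* mirrors: if (!reply && !acquireResult) goto finish; */
static bool complete_ends_call(const void *reply, const void *acquireResult) {
    return reply == NULL && acquireResult == NULL;
}

int main(void) {
    int reply_parcel; /* stand-in for the Parcel a synchronous caller passes */
    printf("oneway call ends on TRANSACTION_COMPLETE: %d\n",
           complete_ends_call(NULL, NULL));
    printf("sync call ends on TRANSACTION_COMPLETE:   %d\n",
           complete_ends_call(&reply_parcel, NULL));
    return 0;
}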

First pass:

//第一次循环取出BR_NOOP
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)//acquireResult默认是nullptr
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;//第一次循环时,向驱动发送请求add消息,mout中有值
        
        err = mIn.errorCheck();//检查数据是否错误
        if (err < NO_ERROR) break;
        //if (mIn.dataAvail() == 0) continue;此时min中有数据,故不会执行

        cmd = (uint32_t)mIn.readInt32();//取出第一个cmd,此时是BR_NOOP

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {//此时是BR_NOOP,没找到,所以执行executeCommand


        default:
            err = executeCommand(cmd);//查看第47行
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}
status_t IPCThreadState::executeCommand(int32_t cmd)//此时cmd是BR_NOOP,此命令没做处理
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
 
    case BR_NOOP:
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }

    return result;
}

Second pass:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)//acquireResult默认是nullptr
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;//第二次循环
        err = mIn.errorCheck();//检查数据是否错误
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;//此时min中有数据,故不会执行

        cmd = (uint32_t)mIn.readInt32();//取出第二个cmd,此时是BR_TRANSACTION_COMPLETE

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;//oneway (no reply expected): go to finish; synchronous: continue with the next while iteration
            break;     
       
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

2.4.5 talkWithDriver, third pass (synchronous messages only):

1. This time the mediaplayerservice thread goes to sleep.

status_t IPCThreadState::talkWithDriver(bool doReceive)//默认doReceive是true
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    const bool needRead = mIn.dataPosition() >= mIn.dataSize();//此时needRead是true

    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;//此时outAvail是0

    bwr.write_size = outAvail;//write_size是0
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {//需要读取数据
        bwr.read_size = mIn.dataCapacity();//设置需要读取的大小。
        bwr.read_buffer = (uintptr_t)mIn.data();
    }
     /*else {//不执行
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }*/



    // 不会执行
    //if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {

#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)//从驱动中读取消息,在这里面
        //会线程休眠,//查看【3.3.2】binder_ioctl
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        /*not executed on this path
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
        */
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
                        << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}

2.4.6 binder_ioctl

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;//fetch the proc object of the mediaservice process
    struct binder_thread *thread;//the mediaservice process's binder thread
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;//__user表示用户空间的指针

    trace_binder_ioctl(cmd, arg);

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//条件成立,立即返回,不休眠
    if (ret)
        goto err_unlocked;

    binder_lock(__func__);
    thread = binder_get_thread(proc);//获取binder_thread
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {//计算数据是否和规范
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//拷贝数据到内核空间
            ret = -EFAULT;
            goto err;
        }


        /*if (bwr.write_size > 0) {//此时write_siz=0,不执行
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
            trace_binder_write_done(ret);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
      */
        if (bwr.read_size > 0) {//此时read_size>0
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
            //the mediaservice binder thread blocks inside this call; the code below runs only after it is woken
            /*
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
            */
        }
        binder_debug(BINDER_DEBUG_READ_WRITE,
                 "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
                 proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
                 bwr.read_consumed, bwr.read_size);
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    




    }
    ret = 0;
err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
    wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret && ret != -ERESTARTSYS)
        printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
    trace_binder_ioctl_done(ret);
    return ret;
}

2.4.7 binder_thread_read

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  void  __user *buffer, int size,
                  signed long *consumed, int non_block)
         //arguments:
         //proc: the mediaservice service's proc
         //thread: the mediaservice service's binder thread
         //buffer: points at read_buffer, the start of the read area
         //read_size > 0
         //read_consumed is 0
         //non_block is 0, i.e. the call blocks
{
    void __user *ptr = buffer + *consumed;//指向buffer首地址
    void __user *end = buffer + size;//指向尾地址

    int ret = 0;
    int wait_for_proc_work;

    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))//向里面放一个BR_NOOP命令
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }

retry:
    wait_for_proc_work = thread->transaction_stack == NULL &&
                list_empty(&thread->todo);//此时当前线程的运输事务不空,即transaction_stack不为空



    thread->looper |= BINDER_LOOPER_STATE_WAITING;//设置等待的flag
    /*if (wait_for_proc_work)//不执行
        proc->ready_threads++;*/

    binder_unlock(__func__);


    if (wait_for_proc_work) {//wait_for_proc_work is false, so this branch is skipped
        /*
        if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
                    BINDER_LOOPER_STATE_ENTERED))) {

            wait_event_interruptible(binder_user_error_wait,
                         binder_stop_on_user_error < 2);
        }
        binder_set_nice(proc->default_priority);
        if (non_block) {
            if (!binder_has_proc_work(proc, thread))
                ret = -EAGAIN;
        } else
            ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
        */
    } else {//this is the branch taken
        if (non_block) {//skipped: the call is blocking
            /*
            if (!binder_has_thread_work(thread))
                ret = -EAGAIN;
            */
        } else
            ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));//the mediaservice thread goes to sleep here
    }
//nothing past this point executes yet, so it is elided.....
}

2.5 ServiceManager is woken up

At the end of 2.3.13 binder_transaction, mediaplayerservice inserted the add message into servicemanager's todo queue and woke the servicemanager process.

From the startup article we know that servicemanager blocks and sleeps when it has no messages. We now return to the point in the servicemanager startup analysis where its thread blocked; see the startup article if anything is unclear.

First, recall servicemanager's call stack at the moment it went to sleep:

binder_loop -> binder_ioctl -> binder_thread_read -> blocked, sleeping.

2.5.1 binder_thread_read

1. Once servicemanager is woken, it takes the work item off the todo queue, builds the binder_transaction_data message from it, and writes a BR_TRANSACTION command.

2. It then copies the binder_transaction_data back to user space. Note that only this small control structure is copied; the payload itself is not copied again, since binder performs only one data copy (see the sketch below).

//we are now on the ServiceManager side.
static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      void  __user *buffer, int size,
			      signed long *consumed, int non_block)
				//参数分析:
				//sm对应的proc对象
				//binder线程对象
				//读buffer的首地址,
				//size是32字节
				//consumed是0,代表驱动写入readbuffer的大小
				//non_block是0,代表阻塞
{
	void __user *ptr = buffer + *consumed;//还是数据首地址
	void __user *end = buffer + size;//数据尾地址

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {//当消费数量为空时,将BR_NOOP放入ptr,即放入了read buffer,那么就是覆盖了BC_ENTER_LOOPER
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);//readbuffer往后移动跳过命令的位置。
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL &&
				list_empty(&thread->todo);//此时todo为空,并且transaction_stack也为空,即wait_for_proc_work为true
				//如果一个线程的的事务堆栈 transaction_stack 不等于 NULL, 表示它正在等待其他线程完成另外一个事务.
				//如果一个线程的 todo 队列不等于 NULL, 表示该线程有未处理的工作项.
				//一个线程只有在其事务堆栈 transaction_stack 为 NULL, 并且 todo 队列为 NULL 时, 才可以去处理其所属进程todo 队列中的待处理工作项. 
				//否则就要处理其事务堆栈 transaction_stack 中的事物或者 todo 队列中的待处理工作项.
	/**
	if (thread->return_error != BR_OK && ptr < end) {//如果当前线程的状态是错误,并且readbuffer有空间,则写入错误信息。
		if (thread->return_error2 != BR_OK) {
			if (put_user(thread->return_error2, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);
			binder_stat_br(proc, thread, thread->return_error2);
			if (ptr == end)
				goto done;
			thread->return_error2 = BR_OK;
		}
		if (put_user(thread->return_error, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		binder_stat_br(proc, thread, thread->return_error);
		thread->return_error = BR_OK;
		goto done;
	}
	*/


	thread->looper |= BINDER_LOOPER_STATE_WAITING;//BINDER_LOOPER_STATE_WAITING 表示该线程正处于空闲状态
	if (wait_for_proc_work)//无任何任务处理
		proc->ready_threads++;//进程中空闲binder线程加1

	binder_unlock(__func__);

	trace_binder_wait_for_work(wait_for_proc_work,
				   !!thread->transaction_stack,
				   !list_empty(&thread->todo));
	if (wait_for_proc_work) {//当进程todo队列没有数据,则进入休眠等待状态
		/**
		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
					BINDER_LOOPER_STATE_ENTERED))) {//如果当前线程还没有注册过,即还未发送BC_ENTER_LOOPER指令,而我们挂起了该线程,即为出错。
			binder_user_error("binder: %d:%d ERROR: Thread waiting "
				"for process work before calling BC_REGISTER_"
				"LOOPER or BC_ENTER_LOOPER (state %x)\n",
				proc->pid, thread->pid, thread->looper);
			wait_event_interruptible(binder_user_error_wait,
						 binder_stop_on_user_error < 2);//不休眠
		}
		*/
		binder_set_nice(proc->default_priority);
		/**
		if (non_block) {//如果是非阻塞的。但是binder通信一般是阻塞的
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		}
		*/
		else
			ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));//当进程todo队列没有数据,则进入休眠等待状态,
		//待直到其所属的进程有新的未处理工作项为止.
	} 
	
	
	
	//此时被唤醒后,执行后半段。
	binder_lock(__func__);

	if (wait_for_proc_work)
		proc->ready_threads--;//空闲线程减1
	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;//取消等待的标志

	if (ret)
		return ret;

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;//运输事务的数据
		struct binder_work *w;
		struct binder_transaction *t = NULL;//运输事务
		if (!list_empty(&thread->todo))//first try the thread's own todo queue; it is empty this time
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
		//the thread todo queue has no data, so take the work item from the process todo queue
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		/**
		else {//没有数据,则返回retry
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) 
				goto retry;
			break;
		}
		*/

		if (end - ptr < sizeof(tr) + 4)//如果数据不规范,则break
			break;

		switch (w->type) {
		case BINDER_WORK_TRANSACTION: {//获取transaction数据
			t = container_of(w, struct binder_transaction, work);
		} break;
		}

		/**if (!t)//如果数据错误
			continue;
		*/

		BUG_ON(t->buffer == NULL);
		if (t->buffer->target_node) {
			struct binder_node *target_node = t->buffer->target_node;//取出目标binder实体此时是SM的binder实体
			tr.target.ptr = target_node->ptr;//SM的binder实体BBinder对应的bpBiner引用
			tr.cookie =  target_node->cookie;//SM的binder实体本地地址
			t->saved_priority = task_nice(current);//保存之前的优先级
			if (t->priority < target_node->min_priority &&
			    !(t->flags & TF_ONE_WAY))
				binder_set_nice(t->priority);//设置较大的优先级
			else if (!(t->flags & TF_ONE_WAY) ||
				 t->saved_priority > target_node->min_priority)
				binder_set_nice(target_node->min_priority);
			cmd = BR_TRANSACTION;//设置cmd是BR_TRANSACTION
		}
		/**
		else {
			tr.target.ptr = NULL;
			tr.cookie = NULL;
			cmd = BR_REPLY;
		}
		*/
	
		tr.code = t->code;//code是ADD_SERVICE_TRANSACTION,3
		tr.flags = t->flags;//允许使用文件描述符进行答复,值是0X10
		tr.sender_euid = t->sender_euid;//mediaservice的进程的uid

		if (t->from) {
			struct task_struct *sender = t->from->proc->tsk;
			tr.sender_pid = task_tgid_nr_ns(sender,//mediaservice的进程的pid
							current->nsproxy->pid_ns);
		} else {
			tr.sender_pid = 0;
		}

		tr.data_size = t->buffer->data_size;//buffer大小
		tr.offsets_size = t->buffer->offsets_size;
		tr.data.ptr.buffer = (void *)t->buffer->data +
					proc->user_buffer_offset;//buffer
		tr.data.ptr.offsets = tr.data.ptr.buffer +
					ALIGN(t->buffer->data_size,
					    sizeof(void *));//offsets大小

		if (put_user(cmd, (uint32_t __user *)ptr))//将cmd命令写回用户空间,此时是命令是BR_NOOP和BR_TRANSACTION
			return -EFAULT;
		ptr += sizeof(uint32_t);//跳过cmd
		if (copy_to_user(ptr, &tr, sizeof(tr)))//拷贝内核数据到用户空间
			return -EFAULT;
		ptr += sizeof(tr);

		trace_binder_transaction_received(t);
		binder_stat_br(proc, thread, cmd);

		list_del(&t->work.entry);
		t->buffer->allow_user_free = 1;
		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
			t->to_parent = thread->transaction_stack;
			t->to_thread = thread;
			thread->transaction_stack = t;//设置当前SMbinder线程的运输事务是t
		} 
		/*else {
			t->buffer->transaction = NULL;
			kfree(t);
			binder_stats_deleted(BINDER_STAT_TRANSACTION);
		}
		*/
		break;
	}

done:

	*consumed = ptr - buffer;//消费的大小
	if (proc->requested_threads + proc->ready_threads == 0 &&
	    proc->requested_threads_started < proc->max_threads &&
	    (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
	     BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
	     /*spawn a new thread if we leave this out */) {
		proc->requested_threads++;
		binder_debug(BINDER_DEBUG_THREADS,
			     "binder: %d:%d BR_SPAWN_LOOPER\n",
			     proc->pid, thread->pid);
		if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
			return -EFAULT;
		binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
	}
	return 0;
}
static int binder_has_proc_work(struct binder_proc *proc,
                struct binder_thread *thread)
{
    return !list_empty(&proc->todo) ||
        (thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);//此时todo不为空,故返回false,故不再阻塞。
}

2.5.2 binder_ioctl

Following the call flow, we are back in binder_loop -> binder_ioctl.

1. The control structure is copied to user space once more. Again, only the copy of the bookkeeping structure happens here; the payload is not copied again, since binder performs only one data copy.

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;
	struct binder_thread *thread;//binder线程
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//不休眠
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc);//获取binder_thread
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {//检查大小是否正常
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//拷贝用户空间数据到内核空间
			ret = -EFAULT;
			goto err;
		}
		/**
		if (bwr.write_size > 0) {//此时为0,不执行
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			trace_binder_write_done(ret);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		*/
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			//此时会阻塞,等待消息的到来。
			
			trace_binder_read_done(ret);
			if (!list_empty(&proc->todo))//如果list队列不为空,则唤醒线程
				wake_up_interruptible(&proc->wait);
				
			/**
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
			*/
		}
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {//复制内核空间数据到用户空间,注意此处只是执行了拷贝命令,但没有拷贝数据,binder只有一次数据拷贝。
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
	
}

2.5.3 binder_loop

We return once more to the binder_loop function.

1. Once SM has the data, it calls binder_parse to parse it.

//此时回到ServiceManager此函数,
void binder_loop(struct binder_state *bs, binder_handler func)
//参数分析:bs是存储了binder的三个信息。func是回调函数svcmgr_handler
{
    int res;
    struct binder_write_read bwr;//一个结构体
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;//向binder驱动发送命令协议BC_ENTER_LOOPER,告诉binder驱动"本线程要进入循环状态了"
    binder_write(bs, readbuf, sizeof(uint32_t));//下文有展开。只写入,即BC_ENTER_LOOPER

    for (;;) {//死循环,从驱动中读取消息
        bwr.read_size = sizeof(readbuf);//此时是BC_ENTER_LOOPER的大小,32字节
        bwr.read_consumed = 0;//
        bwr.read_buffer = (uintptr_t) readbuf;//数据是BC_ENTER_LOOPER

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);//无消息时,会阻塞在此处,等待有消息,然后调用binder_parse去解析消息。
		//此时readbuffer有数据了。

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
		//参数分析:
		//bs结构体
		//readbuf,readbuf首地址
		//readbuf的消息大小
		//func是回调函数svcmgr_handler
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

2.5.4 binder_parse

The buffer now holds two commands, which binder_parse consumes one at a time (sketched below).

1. BR_NOOP, which does nothing.

2. BR_TRANSACTION. The parser first reads the binder_transaction_data message carried after this command, then invokes the callback to handle it. If the transaction is asynchronous, the data is freed afterwards; if it is synchronous, a reply message is sent back to the driver.
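A minimal sketch of the cursor walk this parser performs, with stand-in command values (the real BR_* values come from the binder UAPI header):

#include <stdint.h>
#include <stdio.h>

enum { DEMO_BR_NOOP = 1, DEMO_BR_TRANSACTION = 2 };

int main(void) {
    uint32_t readbuf[] = { DEMO_BR_NOOP, DEMO_BR_TRANSACTION };
    uintptr_t ptr = (uintptr_t)readbuf;
    uintptr_t end = ptr + sizeof(readbuf);

    while (ptr < end) {               /* one command per iteration */
        uint32_t cmd = *(uint32_t *)ptr;
        ptr += sizeof(uint32_t);      /* a real BR_TRANSACTION would also consume its payload here */
        printf("dispatch cmd %u\n", cmd);
    }
    return 0;
}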

The first pass handles the BR_NOOP command.

//first pass: parse the BR_NOOP command
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
				 //参数分析:
				//bs结构体
				//bio是0
				//readbuf,readbuf首地址
				//readbuf的消息大小
				//func是回调函数svcmgr_handler
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;//计算数据尾地址

    while (ptr < end) {//死循环取数据
        uint32_t cmd = *(uint32_t *) ptr;//从buffer取出cmd,此时第一个是BR_NOOP,第二个是BR_TRANSACTION
        ptr += sizeof(uint32_t);//指针位置后移
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        }
    }

    /**return r;此时并没有返回,要处理第二个数据*/
}

The second pass handles the BR_TRANSACTION command.

//second pass: parse the BR_TRANSACTION command
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
				 //参数分析:
				//bs结构体
				//bio是0
				//readbuf,readbuf首地址
				//readbuf的消息大小
				//func是回调函数svcmgr_handler
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;//计算数据尾地址

    while (ptr < end) {//死循环取数据
        uint32_t cmd = *(uint32_t *) ptr;//从buffer取出cmd,此时第二个是BR_TRANSACTION
        ptr += sizeof(uint32_t);//指针位置后移
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_TRANSACTION_SEC_CTX:
        case BR_TRANSACTION: {
            struct binder_transaction_data_secctx txn;
            if (cmd == BR_TRANSACTION_SEC_CTX) {//not taken here
                /**
                if ((end - ptr) < sizeof(struct binder_transaction_data_secctx)) {
                    ALOGE("parse: txn too small (binder_transaction_data_secctx)!\n");
                    return -1;
                }
                memcpy(&txn, (void*) ptr, sizeof(struct binder_transaction_data_secctx));
                ptr += sizeof(struct binder_transaction_data_secctx);
                */
            } else /* BR_TRANSACTION */ {//BR_TRANSACTION
                if ((end - ptr) < sizeof(struct binder_transaction_data)) {//检查数据大小是否正确
                    ALOGE("parse: txn too small (binder_transaction_data)!\n");
                    return -1;
                }
                memcpy(&txn.transaction_data, (void*) ptr, sizeof(struct binder_transaction_data));//将binder_transaction_data拷贝到transaction_data中
                ptr += sizeof(struct binder_transaction_data);//位置移动

                txn.secctx = 0;
            }

            binder_dump_txn(&txn.transaction_data);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;//消息
                struct binder_io reply;//回复消息
                int res;

                bio_init(&reply, rdata, sizeof(rdata), 4);//初始化空的回复消息
                bio_init_from_txn(&msg, &txn.transaction_data);//从txn.transaction_data 解析出binder_io的信息,存入msg
                res = func(bs, &txn, &msg, &reply);//调用回调函数去处理
                if (txn.transaction_data.flags & TF_ONE_WAY) {//如果是TF_ONE_WAY处理,则释放txn->data的数据
                    binder_free_buffer(bs, txn.transaction_data.data.ptr.buffer);
                } else {//如果不是TF_ONE_WAY处理,给binder驱动回复数据
                    binder_send_reply(bs, &reply, txn.transaction_data.data.ptr.buffer, res);//参数分析:
					//bs是sm进程的信息
					//reply中有一个0
					//txn.transaction_data.data.ptr.buffer,发送方数据的首地址
					//res=0
                }
            }
            break;
        }
        }
    }

    return r;
}

2.5.5 bio_init

1. Initializes an empty reply message.

void bio_init(struct binder_io *bio, void *data,
              size_t maxdata, size_t maxoffs)
			  //参数&reply, rdata, sizeof(rdata), 4
			  //bio是reply
			  //data是rdata[256/4]
			  //maxdata是大小是256
			  //maxoffs是4
{
    size_t n = maxoffs * sizeof(size_t);

    if (n > maxdata) {
        bio->flags = BIO_F_OVERFLOW;
        bio->data_avail = 0;
        bio->offs_avail = 0;
        return;
    }

    bio->data = bio->data0 = (char *) data + n;//指向了数据的第32位
    bio->offs = bio->offs0 = data;//指向了数据的首段
    bio->data_avail = maxdata - n;//可用数据是256-32
    bio->offs_avail = maxoffs;//偏移是4
    bio->flags = 0;
}
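Plugging in the numbers from the call above (rdata is 256 bytes, maxoffs is 4), a quick check of the layout bio_init produces; sizeof(size_t) is assumed to be 8 here, as on a 64-bit build, so the offsets area takes the first 32 bytes:

#include <stdio.h>

int main(void) {
    size_t maxdata = 256, maxoffs = 4;
    size_t n = maxoffs * sizeof(size_t); /* offsets area; 32 bytes when size_t is 8 */
    printf("offs start: +0, data start: +%zu, data_avail: %zu, offs_avail: %zu\n",
           n, maxdata - n, maxoffs);
    return 0;
}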

2.5.6 bio_init_from_txn

1. Extracts the binder_io view of the message from txn.transaction_data.

void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
    bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;//指向buffer数据的首地址
    bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;//偏移量
    bio->data_avail = txn->data_size;//可用数据为buffer数据的大小
    bio->offs_avail = txn->offsets_size / sizeof(size_t);//偏移的大小
    bio->flags = BIO_F_SHARED;
}

2.5.7 svcmgr_handler

This is the callback that actually handles the message.

1. It first walks the incoming data, pulling each field out of the buffer in order (a toy reader is sketched below).

2. On recognizing an add-service request, it reads out the service's name string and the handle value of the mediaservice binder.

3. It then saves the mediaservice handle and the service name into svclist, completing the registration.
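The read order mirrors the write order shown in the comments just below. Here is a hypothetical mini-reader in the spirit of bio_get_uint32; the helper name, struct, and buffer contents are invented for illustration, and the byte layout assumes a little-endian machine:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct mini_io { const uint8_t *data; size_t avail; };

/* hypothetical stand-in for bio_get_uint32: read 4 bytes, advance the cursor */
static uint32_t mini_get_u32(struct mini_io *io) {
    uint32_t v;
    memcpy(&v, io->data, sizeof(v));
    io->data  += sizeof(v);
    io->avail -= sizeof(v);
    return v;
}

int main(void) {
    uint8_t buf[8] = { 0, 0, 0, 0, 42, 0, 0, 0 }; /* strict_policy word, then a payload word */
    struct mini_io io = { buf, sizeof(buf) };

    (void)mini_get_u32(&io);  /* strict_policy: read and ignored, like the real handler */
    printf("next word: %u, bytes left: %zu\n", mini_get_u32(&io), io.avail);
    return 0;
}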

//data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());//写入RPC头信息"android.os.IServiceManager"
//data.writeString16(name);//写入"media.player"
//data.writeStrongBinder(service);//把一个binder实体“打扁”并写入parcel, 服务的实体对象:new MediaPlayerService
//data.writeInt32(allowIsolated ? 1 : 0);//写入0
//data.writeInt32(dumpsysPriority);//写入8	
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data_secctx *txn_secctx,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    uint32_t dumpsys_priority;

    struct binder_transaction_data *txn = &txn_secctx->transaction_data;

    //ALOGI("target=%p code=%d pid=%d uid=%d\n",
    //      (void*) txn->target.ptr, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.ptr != BINDER_SERVICE_MANAGER)//BINDER_SERVICE_MANAGER是0,txn->target.ptr是SM的句柄,判断目标的句柄是不是sm服务
        return -1;

    if (txn->code == PING_TRANSACTION)//如果code是之前的伪事务,用于确认sm是否已经注册好
        return 0;

	//从 msg的 binder_io.data的起始地址读取4个字节的内容,存入strict_policy,strict_policy现在不需要使用,可以直接忽略
    //然后msg->data 往后偏移4个字节,即忽略了开头的strict_policy;"android.os.IServiceManager"。msg->data_avail 缓冲区的剩余可用字节数减去4个字节。
    //msg->data0一直不变,指向数据缓冲区的起始地址
    strict_policy = bio_get_uint32(msg);
    bio_get_uint32(msg); //继续偏移4个字节,忽略了header("android.os.IServiceManager"),然后msg->data 往后偏移4个字节,即忽略了开头的strict_policy;
	//msg->data_avail 缓冲区的剩余可用字节数减去4个字节。
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }
	/**seliux相关
    if (sehandle && selinux_status_updated() > 0) {
#ifdef VENDORSERVICEMANAGER
        struct selabel_handle *tmp_sehandle = selinux_android_vendor_service_context_handle();
#else
        struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
#endif
        if (tmp_sehandle) {
            selabel_close(sehandle);
            sehandle = tmp_sehandle;
        }
    }*/

    switch(txn->code) {
	/**
    case SVC_MGR_GET_SERVICE: //获取服务
    case SVC_MGR_CHECK_SERVICE: //检查服务
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid,
                                 (const char*) txn_secctx->secctx);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;
	*/
    case SVC_MGR_ADD_SERVICE: //添加服务
        s = bio_get_string16(msg, &len); //读取服务的string名字和长度,保存在s和len中,此时是media.player
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);//获取mediaservice的binder的句柄
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;//allow_isolated是0
        dumpsys_priority = bio_get_uint32(msg);//dumpsys_priority是8
		//注册服务
        if (do_add_service(bs, s, len, handle, txn->sender_euid, allow_isolated, dumpsys_priority,
                           txn->sender_pid, (const char*) txn_secctx->secctx))
			//参数分析:
			//bs是serviceManager的信息
			//s是字符串media.player
			//len是media.player的长度
			//handle是mediaservice的句柄,是会每次加1,此时如果只有两个服务则,是1
			//sender_euid,发送者id,即meidaservice的uid
			//allow_isolated是0
			//dumpsys_priority是8
			//sender_pid,发送者的pid,即meidaservice的pid
			//secctx安全上下文
			
            return -1;
        break;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

2.5.8 bio_get_ref

//what this function does:
//1. return the handle value for the service; here, mediaservice's handle
uint32_t bio_get_ref(struct binder_io *bio)
{
    struct flat_binder_object *obj;//binder扁平对象

    obj = _bio_get_obj(bio);
    if (!obj)
        return 0;

    if (obj->hdr.type == BINDER_TYPE_HANDLE)
        return obj->handle;//return the handle corresponding to mediaservice

    return 0;
}

2.5.9 do_add_service

1. Checks whether this service has already been registered; if not, it stores the mediaservice handle and service name into svclist, completing the registration (see the sketch below).
2. Sends a message to the driver to register a death notification for the mediaservice binder reference.
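As referenced in step 1, a minimal sketch of the head-insert bookkeeping on svclist. The struct here is a simplified stand-in; the real svcinfo also carries the death-notification fields and a flexible-length name:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct svcinfo_demo {
    struct svcinfo_demo *next;
    unsigned handle;
    char name[32];
};

static struct svcinfo_demo *svclist_demo; /* head of the registered-service list */

static void add_demo(const char *name, unsigned handle) {
    struct svcinfo_demo *si = calloc(1, sizeof(*si));
    if (!si) return;
    si->handle = handle;
    strncpy(si->name, name, sizeof(si->name) - 1);
    si->next = svclist_demo;  /* the new service becomes the new list head */
    svclist_demo = si;
}

int main(void) {
    add_demo("media.player", 1);
    printf("registered: %s -> handle %u\n", svclist_demo->name, svclist_demo->handle);
    free(svclist_demo);
    return 0;
}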

//what this function does:
//1. save the mediaservice handle and service name into svclist, completing the registration
//2. ask the driver to register a death notification for the mediaservice binder reference
int do_add_service(struct binder_state *bs, const uint16_t *s, size_t len, uint32_t handle,
                   uid_t uid, int allow_isolated, uint32_t dumpsys_priority, pid_t spid, const char* sid) {
			//参数分析:
			//bs是serviceManager的信息
			//s是字符串media.player
			//len是media.player的长度
			//handle是mediaservice的句柄,是会每次加1,此时如果只有两个服务则,是1
			//sender_euid,发送者id,即meidaservice的uid
			//allow_isolated是0
			//dumpsys_priority是8
			//sender_pid,发送者的pid,即meidaservice的pid
			//secctx安全上下文
    struct svcinfo *si;

	
    //service的name长度不能大于127
    if (!handle || (len == 0) || (len > 127))
        return -1;

    if (!svc_can_register(s, len, spid, sid, uid)) {//最终调用selinux_check_access() 进行selinux的权限检查,检查服务是否有进行服务注册
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
             str8(s, len), handle, uid);
        return -1;
    }

    si = find_svc(s, len);//check whether an svcinfo for media.player already exists
    if (si) {//already registered: override the previous entry
        /**
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            svcinfo_death(bs, si);//the service was registered before, so release the stale entry
        }
        si->handle = handle;
        */
    }
	else {//not yet registered
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));//给svcinfo分配大小,因为里面保存名字的数组长度是0,
		//故需要(len + 1) * sizeof(uint16_t)分配string的长度,+1是因为字符串需要\0
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;//保存meidaservice的句柄
        si->len = len;//名字的长度
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));//拷贝名字
        si->name[len] = '\0';//赋值\0
        si->death.func = (void*) svcinfo_death;//binder死亡时,执行的函数
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;//0
        si->dumpsys_priority = dumpsys_priority;//8
        si->next = svclist;//指向下一个服务的svcinfo,svclist保存所有已注册的服务
        svclist = si;//更新下一个服务为meidaservice的svcinfo,用链表保存
    }

    binder_acquire(bs, handle);//以BC_ACQUIRE命令,handle为目标的信息,通过ioctl发送给binder驱动
    binder_link_to_death(bs, handle, &si->death);//以BC_REQUEST_DEATH_NOTIFICATION命令的信息,
	//通过ioctl发送给binder驱动,主要用于清理内存等收尾工作。BC_REQUEST_DEATH_NOTIFICATION作用是注册死亡通知。
    return 0;
}

2.5.10 find_svc

1. Searches svclist for an svcinfo matching the current mediaservice. svclist holds every service already registered with SM, including each name and handle value.

struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {//查询svclist中是否有包含当前mediaservice的svcinfo结构体
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}

2.5.11 binder_acquire

1. This increments the strong reference count of the driver-side binder_ref that corresponds to mediaservice's handle.

//bump the strong reference count of the driver-side binder_ref for mediaservice's handle
void binder_acquire(struct binder_state *bs, uint32_t target)
{
    uint32_t cmd[2];
    cmd[0] = BC_ACQUIRE;//让binder_ref的强引用计数+1
    cmd[1] = target;//此时target是meidaservice服务的handle
    binder_write(bs, cmd, sizeof(cmd));
}

2.5.12 binder_write

//still the strong-reference bump for mediaservice's handle
int binder_write(struct binder_state *bs, void *data, size_t len)
//arguments here: data = the cmd array, with cmd[0] = BC_ACQUIRE and cmd[1] = mediaservice's handle

{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);//和binder驱动通信
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}

2.5.13 binder_ioctl

//still the strong-reference bump for mediaservice's handle
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;//fetch the proc object of the SM process
	struct binder_thread *thread;//the SM process's binder thread
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;//__user表示用户空间的指针

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//条件成立,立即返回,不休眠
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc);//获取binder_thread
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		binder_debug(BINDER_DEBUG_READ_WRITE,
			     "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
			     proc->pid, thread->pid, bwr.write_size, bwr.write_buffer,
			     bwr.read_size, bwr.read_buffer);

		if (bwr.write_size > 0) {
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			trace_binder_write_done(ret);
			/**
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
			*/
		}
		/**
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			trace_binder_read_done(ret);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		*/
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {//copy back to user space; here this only updates the consumed counters
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}

2.5.14 binder_thread_write

1. Here we follow how the driver increments the strong reference count of the binder_ref behind the media service's handle.

//increment the strong reference count of the binder_ref behind the media service's handle
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
			void __user *buffer, int size, signed long *consumed)
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))//read the cmd, BC_ACQUIRE here
			return -EFAULT;
		ptr += sizeof(uint32_t);
		trace_binder_command(cmd);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		case BC_INCREFS:
		case BC_ACQUIRE:
		case BC_RELEASE:
		case BC_DECREFS: {
			uint32_t target;
			struct binder_ref *ref;
			const char *debug_string;

			if (get_user(target, (uint32_t __user *)ptr))//read the handle, here the media service's handle
				return -EFAULT;
			ptr += sizeof(uint32_t);
			/**
			if (target == 0 && binder_context_mgr_node &&
			    (cmd == BC_INCREFS || cmd == BC_ACQUIRE)) {
				ref = binder_get_ref_for_node(proc,
					       binder_context_mgr_node);
				if (ref->desc != target) {
					binder_user_error("binder: %d:"
						"%d tried to acquire "
						"reference to desc 0, "
						"got %d instead\n",
						proc->pid, thread->pid,
						ref->desc);
				}
			} 
			*/
			else
				ref = binder_get_ref(proc, target);//proc is servicemanager's binder_proc, target the media service's handle;
			//look up the binder_ref in servicemanager's refs-by-desc red-black tree
			if (ref == NULL) {
				binder_user_error("binder: %d:%d refcou"
					"nt change on invalid ref %d\n",
					proc->pid, thread->pid, target);
				break;
			}
			switch (cmd) {
			case BC_INCREFS:
				debug_string = "IncRefs";
				binder_inc_ref(ref, 0, NULL);
				break;
			case BC_ACQUIRE:
				debug_string = "Acquire";
				binder_inc_ref(ref, 1, NULL);//strong reference count of the binder_ref +1
				break;
			}
			binder_debug(BINDER_DEBUG_USER_REFS,
				     "binder: %d:%d %s ref %d desc %d s %d w %d for node %d\n",
				     proc->pid, thread->pid, debug_string, ref->debug_id,
				     ref->desc, ref->strong, ref->weak, ref->node->debug_id);
			break;
		}
		}
		*consumed = ptr - buffer;
	}
	return 0;
}

static struct binder_ref *binder_get_ref(struct binder_proc *proc,
					 uint32_t desc)
{
	struct rb_node *n = proc->refs_by_desc.rb_node;
	struct binder_ref *ref;

	while (n) {
		ref = rb_entry(n, struct binder_ref, rb_node_desc);

		if (desc < ref->desc)
			n = n->rb_left;
		else if (desc > ref->desc)
			n = n->rb_right;
		else
			return ref;
	}
	return NULL;
}
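The rule binder_inc_ref implements is that the first strong reference taken through a handle also pins the underlying binder_node. A toy, runnable model of just that rule (the struct names and the driver in main are hypothetical, not kernel code):

#include <stdio.h>

/* Toy model (assumption) of binder_inc_ref's invariant: the first strong
 * reference taken through a binder_ref also pins the target binder_node. */
struct node { int strong; };
struct ref  { struct node *node; int strong; };

static void inc_ref_strong(struct ref *r) {
    if (r->strong == 0)
        r->node->strong++;   /* first strong ref through this handle pins the node */
    r->strong++;
}

int main(void) {
    struct node n = {0};
    struct ref  r = {&n, 0};
    inc_ref_strong(&r);      /* BC_ACQUIRE */
    inc_ref_strong(&r);      /* a second acquire only bumps the ref */
    printf("ref.strong=%d node.strong=%d\n", r.strong, n.strong); /* prints 2, 1 */
    return 0;
}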

2.5.15 binder_link_to_death

1. Register a death notification with the driver for the media service, passing in a callback.

//register a death notification with the driver for the media service, passing in a callback
void binder_link_to_death(struct binder_state *bs, uint32_t target, struct binder_death *death)
//parameters:
//target is the media service's handle
//death is a struct holding the address of the callback to run on death
{
    struct {
        uint32_t cmd;
        struct binder_handle_cookie payload;
    } __attribute__((packed)) data;

    data.cmd = BC_REQUEST_DEATH_NOTIFICATION;//ask the driver to register a death notification
    data.payload.handle = target;//the media service's handle
    data.payload.cookie = (uintptr_t) death;//carries the death callback
    binder_write(bs, &data, sizeof(data));
}
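The payload is the driver's binder_handle_cookie. Its shape in the UAPI header of this period is roughly the following (sketch; the assumption is that binder_uintptr_t is 64-bit in the 64-bit protocol):

#include <stdint.h>

/* Sketch (assumption) of the UAPI payload that follows
 * BC_REQUEST_DEATH_NOTIFICATION in the command stream. */
typedef uint64_t binder_uintptr_t;

struct binder_handle_cookie {
    uint32_t handle;            /* which remote binder to watch */
    binder_uintptr_t cookie;    /* opaque value echoed back in BR_DEAD_BINDER */
} __attribute__((__packed__));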

The death callback is shown below:

1. It sends a release command to the driver, which drops the corresponding entry from its red-black trees.

2. It clears the handle stored in the matching svclist entry.

void svcinfo_death(struct binder_state *bs, void *ptr)
{
    struct svcinfo *si = (struct svcinfo* ) ptr;

    ALOGI("service '%s' died\n", str8(si->name, si->len));
    if (si->handle) {
        binder_release(bs, si->handle);
        si->handle = 0;
    }
}





void binder_release(struct binder_state *bs, uint32_t target)
{
    uint32_t cmd[2];
    cmd[0] = BC_RELEASE;
    cmd[1] = target;
    binder_write(bs, cmd, sizeof(cmd));
}
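servicemanager registers its death notification through the raw ioctl protocol; on the C++ framework side the same mechanism is exposed as IBinder::linkToDeath, which internally ends up as the same BC_REQUEST_DEATH_NOTIFICATION. A minimal, hypothetical recipient for comparison:

#include <binder/IBinder.h>
#include <utils/StrongPointer.h>

using namespace android;

// Hypothetical example: watching a remote binder via the framework API
// instead of hand-written BC_REQUEST_DEATH_NOTIFICATION commands.
class MyDeathRecipient : public IBinder::DeathRecipient {
public:
    void binderDied(const wp<IBinder>& /*who*/) override {
        // runs when the driver delivers BR_DEAD_BINDER for this binder
    }
};

void watch(const sp<IBinder>& remoteBinder) {
    sp<MyDeathRecipient> recipient = new MyDeathRecipient();
    remoteBinder->linkToDeath(recipient);  // registers the death notification
}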

2.5.16 binder_write

1. Here we follow the death-notification registration being written to the driver.

//here the write carries the death-notification registration
int binder_write(struct binder_state *bs, void *data, size_t len)
//parameters: see above; data.cmd = BC_REQUEST_DEATH_NOTIFICATION

{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);//talk to the binder driver
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}

2.5.17 binder_ioctl

1. Here we follow the death-notification registration traveling through binder_ioctl.

//here the ioctl carries the death-notification registration
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;//fetch the binder_proc of the servicemanager process
	struct binder_thread *thread;//servicemanager's binder thread
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;//__user marks a user-space pointer

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//condition already true, returns immediately without sleeping
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc);//look up (or create) the binder_thread
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		binder_debug(BINDER_DEBUG_READ_WRITE,
			     "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
			     proc->pid, thread->pid, bwr.write_size, bwr.write_buffer,
			     bwr.read_size, bwr.read_buffer);

		if (bwr.write_size > 0) {
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			trace_binder_write_done(ret);
			/**
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
			*/
		}
		/**
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			trace_binder_read_done(ret);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		*/
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {//copy back to user space; here this only updates the consumed counters
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}

2.5.18 binder_thread_write

1. Here we follow the driver handling the death-notification registration.

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
			void __user *buffer, int size, signed long *consumed)
			//proc is servicemanager's binder_proc
			//write_buffer carries the death-notification request, including the callback cookie
			//write_size: size of that data
			//write_consumed = 0
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))//cmd is BC_REQUEST_DEATH_NOTIFICATION, a request to register a death notification
			return -EFAULT;
		ptr += sizeof(uint32_t);
		trace_binder_command(cmd);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		case BC_REQUEST_DEATH_NOTIFICATION:
		case BC_CLEAR_DEATH_NOTIFICATION: {
			uint32_t target;
			void __user *cookie;
			struct binder_ref *ref;
			struct binder_ref_death *death;

			if (get_user(target, (uint32_t __user *)ptr))//read the media service's handle
				return -EFAULT;
			ptr += sizeof(uint32_t);
			if (get_user(cookie, (void __user * __user *)ptr))//read the cookie struct that carries the death callback
				return -EFAULT;
			ptr += sizeof(void *);
			ref = binder_get_ref(proc, target);//look up the binder_ref for the media service
			if (ref == NULL) {
				binder_user_error("binder: %d:%d %s "
					"invalid ref %d\n",
					proc->pid, thread->pid,
					cmd == BC_REQUEST_DEATH_NOTIFICATION ?
					"BC_REQUEST_DEATH_NOTIFICATION" :
					"BC_CLEAR_DEATH_NOTIFICATION",
					target);
				break;
			}

			binder_debug(BINDER_DEBUG_DEATH_NOTIFICATION,
				     "binder: %d:%d %s %p ref %d desc %d s %d w %d for node %d\n",
				     proc->pid, thread->pid,
				     cmd == BC_REQUEST_DEATH_NOTIFICATION ?
				     "BC_REQUEST_DEATH_NOTIFICATION" :
				     "BC_CLEAR_DEATH_NOTIFICATION",
				     cookie, ref->debug_id, ref->desc,
				     ref->strong, ref->weak, ref->node->debug_id);

			if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
				if (ref->death) {//a death notification is already registered on this ref
					binder_user_error("binder: %d:%"
						"d BC_REQUEST_DEATH_NOTI"
						"FICATION death notific"
						"ation already set\n",
						proc->pid, thread->pid);
					break;
				}
				death = kzalloc(sizeof(*death), GFP_KERNEL);//allocate a binder_ref_death in kernel space
				if (death == NULL) {
					thread->return_error = BR_ERROR;
					binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
						     "binder: %d:%d "
						     "BC_REQUEST_DEATH_NOTIFICATION failed\n",
						     proc->pid, thread->pid);
					break;
				}
				binder_stats_created(BINDER_STAT_DEATH);
				INIT_LIST_HEAD(&death->work.entry);
				death->cookie = cookie;//the cookie carries the death callback
				ref->death = death;//hang the binder_ref_death off the media service's binder_ref
				/**
				if (ref->node->proc == NULL) {//if the media service's proc is NULL the service component has already died; binder notifies the client immediately.
					ref->death->work.type = BINDER_WORK_DEAD_BINDER;
					if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
						list_add_tail(&ref->death->work.entry, &thread->todo);
					} else {
						list_add_tail(&ref->death->work.entry, &proc->todo);
						wake_up_interruptible(&proc->wait);
					}
				}
				*/
			}/**
			else {
				if (ref->death == NULL) {
					binder_user_error("binder: %d:%"
						"d BC_CLEAR_DEATH_NOTIFI"
						"CATION death notificat"
						"ion not active\n",
						proc->pid, thread->pid);
					break;
				}
				death = ref->death;
				if (death->cookie != cookie) {
					binder_user_error("binder: %d:%"
						"d BC_CLEAR_DEATH_NOTIFI"
						"CATION death notificat"
						"ion cookie mismatch "
						"%p != %p\n",
						proc->pid, thread->pid,
						death->cookie, cookie);
					break;
				}
				ref->death = NULL;
				if (list_empty(&death->work.entry)) {
					death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION;
					if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED | BINDER_LOOPER_STATE_ENTERED)) {
						list_add_tail(&death->work.entry, &thread->todo);
					} else {
						list_add_tail(&death->work.entry, &proc->todo);
						wake_up_interruptible(&proc->wait);
					}
				} else {
					BUG_ON(death->work.type != BINDER_WORK_DEAD_BINDER);
					death->work.type = BINDER_WORK_DEAD_BINDER_AND_CLEAR;
				}
			}
			*/
		} break;
		
		}
		*consumed = ptr - buffer;
	}
	return 0;
}

2.6 servicemanager sends the reply to the media service

Let's first look at where we are in the call stack.

binder_parse
    svcmgr_handler
        bio_put_uint32 
    binder_send_reply

svcmgr_handler->bio_put_uint32

2.6.1 bio_put_uint32

1. servicemanager now writes the reply into the driver, which will wake the media service and deliver it.

//here servicemanager writes the reply message for the driver
void bio_put_uint32(struct binder_io *bio, uint32_t n)
//bio is the reply
//n is 0
{
    uint32_t *ptr = bio_alloc(bio, sizeof(n));
    if (ptr)
        *ptr = n;//write 0 into the reply: the status servicemanager returns
}

2.6.2 bio_alloc

//here servicemanager builds the reply it will write to the driver
static void *bio_alloc(struct binder_io *bio, size_t size)
{
    size = (size + 3) & (~3);//round up to a 4-byte boundary; 4 here
    if (size > bio->data_avail) {
        bio->flags |= BIO_F_OVERFLOW;
        return NULL;
    } else {
        void *ptr = bio->data;//points at byte 32 of the reply buffer
        bio->data += size;//advance by 4 bytes, to byte 36
        bio->data_avail -= size;//256-8-4
        return ptr;
    }
}
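The expression `(size + 3) & ~3` is the usual power-of-two round-up trick. A tiny standalone check (the sizes are hypothetical, only the rounding is the point):

#include <stdio.h>

int main(void) {
    for (size_t size = 1; size <= 8; size++)
        /* (size + 3) & ~3 rounds size up to the next multiple of 4 */
        printf("%zu -> %zu\n", size, (size + 3) & ~(size_t)3);
    return 0;  /* prints 1->4, 2->4, 3->4, 4->4, 5->8, ... */
}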

2.6.3 binder_send_reply

1. servicemanager now writes the reply message into the driver.

//here servicemanager writes the reply into the driver
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
	//bs holds servicemanager's binder state
	//reply carries a single 0, servicemanager's status
	//buffer_to_free is txn.transaction_data.data.ptr.buffer, the start of the sender's data
	//status=0
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;//ask the driver to free the sender's buffer
    data.buffer = buffer_to_free;//start of the buffer to free
    data.cmd_reply = BC_REPLY;//the reply command
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
	/**
    if (status) {
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t)&status;
        data.txn.data.ptr.offsets = 0;
    }
	*/
	else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;//data points at byte 36, data0 at byte 32, so 4 bytes: one int 0
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);//0
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;//buffer starts at byte 32
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;//offset area starts at 0
    }
    binder_write(bs, &data, sizeof(data));
}
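Because the struct is declared `__attribute__((packed))`, the driver can parse it as one flat command stream: BC_FREE_BUFFER plus its argument, immediately followed by BC_REPLY plus its binder_transaction_data. A small standalone illustration of that layout (the stand-in types are hypothetical; only the packing behavior matters):

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-ins, only to illustrate the packed layout that
 * binder_send_reply hands to binder_write (assumption: 64-bit userspace). */
typedef uint64_t binder_uintptr_t;
struct fake_txn { char bytes[64]; };   /* placeholder for binder_transaction_data */

struct reply_pkt {
    uint32_t         cmd_free;         /* BC_FREE_BUFFER */
    binder_uintptr_t buffer;           /* buffer to free */
    uint32_t         cmd_reply;        /* BC_REPLY */
    struct fake_txn  txn;              /* transaction payload */
} __attribute__((packed));

int main(void) {
    /* packed => no padding: the driver reads the two commands back-to-back */
    printf("cmd_free  at %zu\n", offsetof(struct reply_pkt, cmd_free));   /* 0  */
    printf("buffer    at %zu\n", offsetof(struct reply_pkt, buffer));     /* 4  */
    printf("cmd_reply at %zu\n", offsetof(struct reply_pkt, cmd_reply));  /* 12 */
    printf("txn       at %zu\n", offsetof(struct reply_pkt, txn));        /* 16 */
    return 0;
}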

2.6.4 binder_write

Here servicemanager writes the reply into the driver.

1. It writes BC_FREE_BUFFER, asking the driver to free the buffer that carried the sender's data.

int binder_write(struct binder_state *bs, void *data, size_t len)
//parameters this time:
//bs holds servicemanager's binder state
//len is sizeof(data); data is laid out as follows:
	//data.cmd_free = BC_FREE_BUFFER;//ask the driver to free the sender's buffer
    //data.buffer = buffer_to_free;//start of the sender's buffer to free
    //data.cmd_reply = BC_REPLY;//the reply command
    //data.txn.target.ptr = 0;
    //data.txn.cookie = 0;
    //data.txn.code = 0;
	//data.txn.flags = 0;
    //data.txn.data_size = reply->data - reply->data0;//4 bytes: one int 0
    //data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);//0
    //data.txn.data.ptr.buffer = (uintptr_t)reply->data0;//points at byte 32, which holds the 0
    //data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;//offset area start

{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);//talk to the binder driver
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}

2.6.5 binder_ioctl

//here the ioctl carries servicemanager's reply
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;//fetch the binder_proc of the servicemanager process
	struct binder_thread *thread;//servicemanager's binder thread
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;//__user marks a user-space pointer

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//condition already true, returns immediately without sleeping
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc);//look up (or create) the binder_thread
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}

		if (bwr.write_size > 0) {
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			trace_binder_write_done(ret);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			trace_binder_read_done(ret);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		binder_debug(BINDER_DEBUG_READ_WRITE,
			     "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			     proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
			     bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}

2.6.6 binder_thread_write

Here servicemanager's reply write reaches binder_thread_write.
1. The first while iteration pulls out BC_FREE_BUFFER, which frees the buffer the media service's add-service request arrived in.
2. The second iteration pulls out BC_REPLY, which sends the reply back to the media service.

First iteration:

//first while iteration: BC_FREE_BUFFER frees the buffer that carried the media service's add request
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
			void __user *buffer, int size, signed long *consumed)
//parameters this time:
//proc is servicemanager's binder_proc
//buffer holds the data laid out as follows:
	//data.cmd_free = BC_FREE_BUFFER;//ask the driver to free the sender's buffer
    //data.buffer = buffer_to_free;//start of the sender's buffer to free
    //data.cmd_reply = BC_REPLY;//the reply command
    //data.txn.target.ptr = 0;
    //data.txn.cookie = 0;
    //data.txn.code = 0;
	//data.txn.flags = 0;
    //data.txn.data_size = reply->data - reply->data0;//4 bytes: one int 0
    //data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);//0
    //data.txn.data.ptr.buffer = (uintptr_t)reply->data0;//points at byte 32, which holds the 0
    //data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;//offset area start
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {//loop over the command stream; take the first command
		if (get_user(cmd, (uint32_t __user *)ptr))//cmd is BC_FREE_BUFFER: free the sender's buffer
			return -EFAULT;
		ptr += sizeof(uint32_t);//advance past the command word
		trace_binder_command(cmd);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		case BC_FREE_BUFFER: {//free the buffer that carried the media service's add request
			void __user *data_ptr;
			struct binder_buffer *buffer;

			if (get_user(data_ptr, (void * __user *)ptr))//read the address of the buffer to free
				return -EFAULT;
			ptr += sizeof(void *);

			buffer = binder_buffer_lookup(proc, data_ptr);//look up the binder_buffer for this address
			if (buffer == NULL) {
				binder_user_error("binder: %d:%d "
					"BC_FREE_BUFFER u%p no match\n",
					proc->pid, thread->pid, data_ptr);
				break;
			}
			if (!buffer->allow_user_free) {//the buffer must be marked as freeable by user space
				binder_user_error("binder: %d:%d "
					"BC_FREE_BUFFER u%p matched "
					"unreturned buffer\n",
					proc->pid, thread->pid, data_ptr);
				break;
			}
			binder_debug(BINDER_DEBUG_FREE_BUFFER,
				     "binder: %d:%d BC_FREE_BUFFER u%p found buffer %d for %s transaction\n",
				     proc->pid, thread->pid, data_ptr, buffer->debug_id,
				     buffer->transaction ? "active" : "finished");

			if (buffer->transaction) {//detach the buffer from its transaction
				buffer->transaction->buffer = NULL;
				buffer->transaction = NULL;
			}
			/**
			if (buffer->async_transaction && buffer->target_node) {//asynchronous case
				BUG_ON(!buffer->target_node->has_async_transaction);
				if (list_empty(&buffer->target_node->async_todo))
					buffer->target_node->has_async_transaction = 0;
				else
					list_move_tail(buffer->target_node->async_todo.next, &thread->todo);
			}
			*/
			trace_binder_transaction_buffer_release(buffer);
			binder_transaction_buffer_release(proc, buffer, NULL);//drop the references held by the buffer
			binder_free_buf(proc, buffer);//free the buffer itself
			break;
		}
		}
		*consumed = ptr - buffer;
	}
	return 0;
}

Second iteration: the second command is BC_REPLY, which replies to the media service:

//second while iteration: BC_REPLY, which sends the reply back to the media service
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
			void __user *buffer, int size, signed long *consumed)
//parameters: same data layout as in the first iteration above
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr))//pull out the command, BC_REPLY
			return -EFAULT;
		ptr += sizeof(uint32_t);
		trace_binder_command(cmd);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		case BC_TRANSACTION:
		case BC_REPLY: {
			struct binder_transaction_data tr;

			if (copy_from_user(&tr, ptr, sizeof(tr)))//copy the binder_transaction_data
				return -EFAULT;
			ptr += sizeof(tr);
			binder_transaction(proc, thread, &tr, cmd == BC_REPLY);//parameters:
			//proc is servicemanager's binder_proc
			//thread is servicemanager's binder thread
			//tr is the binder_transaction_data
			//reply is 1 (cmd == BC_REPLY)
			break;
		}
		}
		*consumed = ptr - buffer;
	}
	return 0;
}

2.6.7 binder_transaction

1. Find the target process (the media service) via its binder_proc, plus the target thread.

2. Allocate a kernel buffer from the target process's (the media service's) mapped buffer space.

3. Copy the reply into that buffer; because the buffer is mmapped into the media service, the data is now visible to its user space.

Two flows then follow:

The first flow: a pending work item was queued on servicemanager's own todo list, so servicemanager must consume it.
The second flow: the media service is woken up and handles the reply.

//here servicemanager's BC_REPLY reaches binder_transaction
static void binder_transaction(struct binder_proc *proc,
			       struct binder_thread *thread,
			       struct binder_transaction_data *tr, int reply)
			//proc is servicemanager's binder_proc
			//thread is servicemanager's binder thread
			//tr is the binder_transaction_data, laid out as follows:
				//data.txn.target.ptr = 0;
				//data.txn.cookie = 0;
				//data.txn.code = 0;
				//data.txn.flags = 0;
				//data.txn.data_size = reply->data - reply->data0;//4 bytes: one int 0
				//data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);//0
				//data.txn.data.ptr.buffer = (uintptr_t)reply->data0;//points at byte 32, which holds the 0
				//data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;//offset area start
			//reply is 1
{
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	size_t *offp, *off_end;
	struct binder_proc *target_proc;
	struct binder_thread *target_thread = NULL;
	struct binder_node *target_node = NULL;
	struct list_head *target_list;
	wait_queue_head_t *target_wait;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	uint32_t return_error;
	/**transaction log bookkeeping
	e = binder_transaction_log_add(&binder_transaction_log);
	e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
	e->from_proc = proc->pid;
	e->from_thread = thread->pid;
	e->target_handle = tr->target.handle;
	e->data_size = tr->data_size;
	e->offsets_size = tr->offsets_size;
	*/

	if (reply) {//this is a BC_REPLY
		in_reply_to = thread->transaction_stack;//the transaction the media service sent when asking servicemanager to add the service;
		//a binder_transaction describes one leg of binder communication,
		//routing the request out and the result back

		binder_set_nice(in_reply_to->saved_priority);//8
		if (in_reply_to->to_thread != thread) {//that add-service transaction targeted servicemanager's thread, so to_thread must be this thread
			binder_user_error("binder: %d:%d got reply transaction "
				"with bad transaction stack,"
				" transaction %d has target %d:%d\n",
				proc->pid, thread->pid, in_reply_to->debug_id,
				in_reply_to->to_proc ?
				in_reply_to->to_proc->pid : 0,
				in_reply_to->to_thread ?
				in_reply_to->to_thread->pid : 0);
			return_error = BR_FAILED_REPLY;
			in_reply_to = NULL;
			goto err_bad_call_stack;
		}
		thread->transaction_stack = in_reply_to->to_parent;//pop the transaction off servicemanager's thread stack
		target_thread = in_reply_to->from;//the reply targets whoever sent the request: the media service's binder thread

		if (target_thread->transaction_stack != in_reply_to) {//sanity check: it must be the same transaction
			binder_user_error("binder: %d:%d got reply transaction "
				"with bad target transaction stack %d, "
				"expected %d\n",
				proc->pid, thread->pid,
				target_thread->transaction_stack ?
				target_thread->transaction_stack->debug_id : 0,
				in_reply_to->debug_id);
			return_error = BR_FAILED_REPLY;
			in_reply_to = NULL;
			target_thread = NULL;
			goto err_dead_binder;
		}
		target_proc = target_thread->proc;//the reply's target process: the media service's binder_proc
	}
	
	if (target_thread) {//not NULL here
		/**e->to_thread = target_thread->pid;*/ //log bookkeeping
		target_list = &target_thread->todo;//target list: the media service thread's todo queue
		target_wait = &target_thread->wait;//target wait queue: the media service thread's wait queue
	}/**
	else {
		target_list = &target_proc->todo;
		target_wait = &target_proc->wait;
	}*/
	/**e->to_proc = target_proc->pid;*/ //log bookkeeping

	/* TODO: reuse incoming transaction for reply */
	t = kzalloc(sizeof(*t), GFP_KERNEL);//allocate a new binder_transaction
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	binder_stats_created(BINDER_STAT_TRANSACTION);

	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);//allocate a binder_work describing the pending work item
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}
	binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

	t->debug_id = ++binder_last_id;
	/**e->debug_id = t->debug_id;*/ //log bookkeeping

	if (reply)
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "binder: %d:%d BC_REPLY %d -> %d:%d, "
			     "data %p-%p size %zd-%zd\n",
			     proc->pid, thread->pid, t->debug_id,
			     target_proc->pid, target_thread->pid,
			     tr->data.ptr.buffer, tr->data.ptr.offsets,
			     tr->data_size, tr->offsets_size);
	else
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "binder: %d:%d BC_TRANSACTION %d -> "
			     "%d - node %d, data %p-%p size %zd-%zd\n",
			     proc->pid, thread->pid, t->debug_id,
			     target_proc->pid, target_node->debug_id,
			     tr->data.ptr.buffer, tr->data.ptr.offsets,
			     tr->data_size, tr->offsets_size);

	if (!reply && !(tr->flags & TF_ONE_WAY))//reply is 1, so this is not taken
		t->from = thread;
	else
		t->from = NULL;//no sender thread is recorded for a reply
	t->sender_euid = proc->tsk->cred->euid;//sender uid: servicemanager's uid
	t->to_proc = target_proc;//target proc: the media service's binder_proc
	t->to_thread = target_thread;//target thread: the media service's binder thread
	t->code = tr->code;//0
	t->flags = tr->flags;//0
	t->priority = task_nice(current);//thread priority, 8

	trace_binder_transaction(reply, t, target_node);

	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));//allocate a kernel buffer from the target process (the media service); the buffer is mmapped into that process
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;//user space may not free it yet
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;//tie the buffer to this transaction
	t->buffer->target_node = target_node;//NULL here
	trace_binder_transaction_alloc_buf(t->buffer);
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

	offp = (size_t *)(t->buffer->data + ALIGN(tr->data_size, sizeof(void *)));

	if (copy_from_user(t->buffer->data, tr->data.ptr.buffer, tr->data_size)) {//copy the payload (a single 0) into the kernel buffer
		binder_user_error("binder: %d:%d got transaction with invalid "
			"data ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, tr->data.ptr.offsets, tr->offsets_size)) {//copy the offsets area
		binder_user_error("binder: %d:%d got transaction with invalid "
			"offsets ptr\n", proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (!IS_ALIGNED(tr->offsets_size, sizeof(size_t))) {//validate the offsets size
		binder_user_error("binder: %d:%d got transaction with "
			"invalid offsets size, %zd\n",
			proc->pid, thread->pid, tr->offsets_size);
		return_error = BR_FAILED_REPLY;
		goto err_bad_offset;
	}
	off_end = (void *)offp + tr->offsets_size;//end of the offsets; same as the start here, since a reply with no flat_binder_object has offsets_size 0
	
	if (reply) {//reply is 1
		BUG_ON(t->buffer->async_transaction != 0);
		binder_pop_transaction(target_thread, in_reply_to);//pop (and free) the media service's add-service transaction from its thread
	} 
	/**
	else if (!(t->flags & TF_ONE_WAY)) {
		BUG_ON(t->buffer->async_transaction != 0);
		t->need_reply = 1;
		t->from_parent = thread->transaction_stack;
		thread->transaction_stack = t;
	} else {
		BUG_ON(target_node == NULL);
		BUG_ON(t->buffer->async_transaction != 1);
		if (target_node->has_async_transaction) {
			target_list = &target_node->async_todo;
			target_wait = NULL;
		} else
			target_node->has_async_transaction = 1;
	}*/
	t->work.type = BINDER_WORK_TRANSACTION;//work type
	list_add_tail(&t->work.entry, target_list);//queue it on the target's (the media service's) todo queue
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;//
	list_add_tail(&tcomplete->entry, &thread->todo);//queue a pending completion on servicemanager's own todo list
	if (target_wait)
		wake_up_interruptible(target_wait);//wake the media service's thread
	return;
}
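The todo/wait pairing at the end of binder_transaction is essentially a condition-variable hand-off: the waker queues work under the lock and then signals; the sleeper re-checks the queue when it wakes. A minimal user-space analogy with pthreads (names are hypothetical; the driver of course uses kernel wait queues, not pthreads):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wait_q = PTHREAD_COND_INITIALIZER;
static int todo = 0;                       /* stands in for the todo list */

static void *target_thread(void *arg) {
    pthread_mutex_lock(&lock);
    while (!todo)                          /* ~ wait_event_freezable(thread->wait, has_work) */
        pthread_cond_wait(&wait_q, &lock);
    todo = 0;
    pthread_mutex_unlock(&lock);
    printf("woke up, consuming BINDER_WORK_TRANSACTION\n");
    return NULL;
}

int main(void) {
    pthread_t t;
    pthread_create(&t, NULL, target_thread, NULL);
    pthread_mutex_lock(&lock);
    todo = 1;                              /* ~ list_add_tail(&t->work.entry, target_list) */
    pthread_mutex_unlock(&lock);
    pthread_cond_signal(&wait_q);          /* ~ wake_up_interruptible(target_wait) */
    pthread_join(t, NULL);
    return 0;
}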

2.7 servicemanager handles its pending work item, then sleeps again

2.7.1 binder_loop

Let's first recall the call stack:

binder_loop
    binder_parse
        binder_send_reply
            binder_write
                binder_ioctl
                    binder_thread_write
                        binder_transaction

So once binder_transaction has queued the reply, we unwind back to the binder_parse line inside binder_loop.

//back in servicemanager's binder_loop
void binder_loop(struct binder_state *bs, binder_handler func)
//parameters: bs holds the binder state (fd, mapped area, map size); func is the callback svcmgr_handler
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;//tell the binder driver "this thread is entering its loop"
    binder_write(bs, readbuf, sizeof(uint32_t));//expanded earlier; write-only, carrying BC_ENTER_LOOPER

    for (;;) {//loop forever, reading messages from the driver
        bwr.read_size = sizeof(readbuf);//128 bytes (32 words)
        bwr.read_consumed = 0;//
        bwr.read_buffer = (uintptr_t) readbuf;//no data yet

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);//blocks here while there is nothing to read; once a message
        //arrives, readbuf holds data and binder_parse decodes it.

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);//parse here
        //parameters:
        //bs: the binder state
        //readbuf: start of the read buffer
        //bwr.read_consumed: number of bytes read
        //func: the callback svcmgr_handler
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}

The next loop iteration then begins. Because servicemanager's own todo queue still holds one pending work item, this iteration consumes it first.

2.7.2 binder_ioctl

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data;//fetch the binder_proc of the servicemanager process
	struct binder_thread *thread;//servicemanager's binder thread
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg;//__user marks a user-space pointer

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//condition already true, returns 0 immediately without sleeping
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc);//look up (or create) the binder_thread
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {//the command is BINDER_WRITE_READ
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) {//size check
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//copy the descriptor from user space into the kernel
			ret = -EFAULT;
			goto err;
		}

		if (bwr.write_size > 0) {//write_size is 0 on this pass (binder_loop only reads), so this is skipped
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			trace_binder_write_done(ret);
			if (ret < 0) {
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		if (bwr.read_size > 0) {//read_size > 0: we want to read messages from the driver
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			//expanded below; parameters:
			//proc: servicemanager's binder_proc
			//bwr.read_buffer: address of the read buffer
			//read_size > 0
			//read_consumed: bytes consumed so far, 0
			//the last argument is 0 (blocking)
			trace_binder_read_done(ret);
			/**if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}*/
		}
		binder_debug(BINDER_DEBUG_READ_WRITE,
			     "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
			     proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
			     bwr.read_consumed, bwr.read_size);
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}

2.7.3 binder_thread_read

1. First loop iteration: put BR_NOOP into the read buffer, then take the BINDER_WORK_TRANSACTION_COMPLETE item off the todo queue, turn it into BR_TRANSACTION_COMPLETE, and append it to the read buffer.

2. Second loop iteration: the todo queue is now empty, so the function returns to binder_ioctl, which copies the read buffer back to user space.

First iteration:

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      void  __user *buffer, int size,
			      signed long *consumed, int non_block)
			//parameters:
			//proc: servicemanager's binder_proc
			//buffer = bwr.read_buffer, address of the read buffer
			//size = read_size > 0
			//*consumed: bytes consumed so far, 0
			//non_block is 0: blocking
{
	void __user *ptr = buffer + *consumed;//start of the buffer
	void __user *end = buffer + size;//end of the buffer

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
		if (put_user(BR_NOOP, (uint32_t __user *)ptr))//drop a BR_NOOP into the user buffer
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL &&
				list_empty(&thread->todo);//false: servicemanager's thread todo queue holds the pending completion
	/**
	if (thread->return_error != BR_OK && ptr < end) {
		if (thread->return_error2 != BR_OK) {
			if (put_user(thread->return_error2, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);
			binder_stat_br(proc, thread, thread->return_error2);
			if (ptr == end)
				goto done;
			thread->return_error2 = BR_OK;
		}
		if (put_user(thread->return_error, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		binder_stat_br(proc, thread, thread->return_error);
		thread->return_error = BR_OK;
		goto done;
	}
	*/


	thread->looper |= BINDER_LOOPER_STATE_WAITING;
	/**
	if (wait_for_proc_work)//false here: the thread's queue has work
		proc->ready_threads++;

	binder_unlock(__func__);

	trace_binder_wait_for_work(wait_for_proc_work,
				   !!thread->transaction_stack,
				   !list_empty(&thread->todo));
	if (wait_for_proc_work) {
		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
					BINDER_LOOPER_STATE_ENTERED))) {
			binder_user_error("binder: %d:%d ERROR: Thread waiting "
				"for process work before calling BC_REGISTER_"
				"LOOPER or BC_ENTER_LOOPER (state %x)\n",
				proc->pid, thread->pid, thread->looper);
			wait_event_interruptible(binder_user_error_wait,
						 binder_stop_on_user_error < 2);
		}
		binder_set_nice(proc->default_priority);
		if (non_block) {
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
	}*/
	else {
		if (non_block) {//non-blocking case
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));//the thread has work, so this does not block
	}

	binder_lock(__func__);
	/**
	if (wait_for_proc_work)
		proc->ready_threads--;
	*/
	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;//clear the waiting flag

	/**if (ret)
		return ret;*/

	while (1) {
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;

		if (!list_empty(&thread->todo))
			w = list_first_entry(&thread->todo, struct binder_work, entry);//take the BINDER_WORK_TRANSACTION_COMPLETE item
		/**
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added 
				goto retry;
			break;
		}*/
		/**
		if (end - ptr < sizeof(tr) + 4)
			break;
		*/

		switch (w->type) {
		/**
		case BINDER_WORK_TRANSACTION: {
			t = container_of(w, struct binder_transaction, work);
		} break;
		*/
		case BINDER_WORK_TRANSACTION_COMPLETE: {//taken this time
			cmd = BR_TRANSACTION_COMPLETE;//turn it into BR_TRANSACTION_COMPLETE
			if (put_user(cmd, (uint32_t __user *)ptr))//append it to the user buffer, which now holds BR_NOOP and BR_TRANSACTION_COMPLETE
				return -EFAULT;
			ptr += sizeof(uint32_t);

			binder_stat_br(proc, thread, cmd);
			binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
				     "binder: %d:%d BR_TRANSACTION_COMPLETE\n",
				     proc->pid, thread->pid);

			list_del(&w->entry);//the BINDER_WORK_TRANSACTION_COMPLETE item is consumed, unlink it
			kfree(w);//and free it
			binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
		} break;
		}

		if (!t)
			continue;//start the next iteration
			.....
}

Second iteration:

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      void  __user *buffer, int size,
			      signed long *consumed, int non_block)
{

	while (1) {//second pass through the loop
		uint32_t cmd;
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;

		/**
		if (!list_empty(&thread->todo))
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		*/
		else {//taken this time
			/**not executed
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added 
				goto retry;
			break;
			*/
		}

		if (end - ptr < sizeof(tr) + 4)//per the book's flow we exit here: end is the tail of readbuf,
		//while ptr sits at byte 8, past the two commands BR_NOOP and BR_TRANSACTION_COMPLETE
			break;
	}

done:

	*consumed = ptr - buffer;//8 bytes: the two commands BR_NOOP and BR_TRANSACTION_COMPLETE
	return 0;
}

2.7.4 binder_parse

1. First loop iteration: parse BR_NOOP.
2. Second loop iteration: parse BR_TRANSACTION_COMPLETE.

First iteration:

//first iteration parses BR_NOOP
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
                 //parameters:
                //bs: the binder state
                //bio is 0
                //ptr: start of readbuf
                //size: number of bytes read
                //func: the callback svcmgr_handler
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;//end of the data

    while (ptr < end) {//loop over the commands
        uint32_t cmd = *(uint32_t *) ptr;//read the cmd: first BR_NOOP, then BR_TRANSACTION_COMPLETE
        ptr += sizeof(uint32_t);//advance the pointer
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        }
    }

    /**return r; no return yet, the second command still needs handling*/
}

Second iteration, parsing BR_TRANSACTION_COMPLETE:

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {//second pass pulls out BR_TRANSACTION_COMPLETE
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_TRANSACTION_COMPLETE:
            break;
        }
    }
    return r;
}

Control then returns to binder_loop; with nothing left to read, servicemanager blocks again, waiting for the next message from the driver.

2.7.5 servicemanager's main thread sleeps again, waiting for the driver

//back in servicemanager's binder_loop
void binder_loop(struct binder_state *bs, binder_handler func)
//parameters: bs holds the binder state; func is the callback svcmgr_handler
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;//tell the binder driver "this thread is entering its loop"
    binder_write(bs, readbuf, sizeof(uint32_t));//write-only, carrying BC_ENTER_LOOPER

    for (;;) {//third pass of the loop, reading from the driver
        bwr.read_size = sizeof(readbuf);//128 bytes (32 words)
        bwr.read_consumed = 0;//
        bwr.read_buffer = (uintptr_t) readbuf;//no data yet

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);//blocks here
        /*
        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);//    
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
        */
    }
}

2.8 The media service is woken and handles the reply message

At the end of 2.6.7 binder_transaction, servicemanager queued the reply on MediaPlayerService's todo queue and woke the service up. We therefore return to the spot where the media service blocked, 2.4.7 binder_thread_read.

First, recall the call stack at the moment it blocked:

MediaPlayerService::instantiate
    IServiceManager::addService
        BpBinder::transact
            IPCThreadState::transact
                waitForResponse
                    talkWithDriver
                        binder_ioctl
                            binder_thread_read

2.8.1 binder_thread_read

This step does three things:

1. Once the media service is woken, it takes the binder_transaction_data off its todo queue and writes BR_REPLY into the read buffer.

2. It "copies" the binder_transaction_data descriptor to user space. Note that only the descriptor is copied; the payload itself is not copied again, which is binder's single-copy property.

3. It deletes the add-service transaction that was sent to servicemanager, avoiding a memory leak.
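Point 2 rests on the address arithmetic in the code below: tr.data.ptr.buffer = t->buffer->data + proc->user_buffer_offset, where user_buffer_offset is the fixed distance between the kernel and user mappings of the same physical pages. A toy illustration of that arithmetic (the addresses are made up; assumes a 64-bit kernel):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical addresses, only to show the arithmetic behind
     * tr.data.ptr.buffer = t->buffer->data + proc->user_buffer_offset. */
    uint64_t kernel_va = 0xffffffc012345000ULL; /* driver's mapping of the page */
    uint64_t user_va   = 0x0000007f87654000ULL; /* the process's mapping of the same page */
    uint64_t user_buffer_offset = user_va - kernel_va;

    uint64_t t_buffer_data = kernel_va + 0x40;  /* reply payload inside the page */
    uint64_t tr_data_ptr   = t_buffer_data + user_buffer_offset;

    /* the process reads the payload directly; no second copy happens */
    printf("user-space view of the payload: 0x%" PRIx64 "\n", tr_data_ptr);
    return 0;
}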

//the media service handles the reply message
static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  void  __user *buffer, int size,
                  signed long *consumed, int non_block)
         //parameters:
         //proc: the media service's binder_proc
         //thread: the media service's binder thread
         //buffer: read_buffer, start of the read area
         //read_size > 0
         //read_consumed is 0
         //non_block is 0: blocking
{
    void __user *ptr = buffer + *consumed;//start of the buffer
    void __user *end = buffer + size;//end of the buffer

    int ret = 0;
    int wait_for_proc_work;

    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))//drop a BR_NOOP in first
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }

retry:
    wait_for_proc_work = thread->transaction_stack == NULL &&
                list_empty(&thread->todo);//false: this thread's transaction_stack is not empty
/*
    if (thread->return_error != BR_OK && ptr < end) {
        if (thread->return_error2 != BR_OK) {
            if (put_user(thread->return_error2, (uint32_t __user *)ptr))
                return -EFAULT;
            ptr += sizeof(uint32_t);
            binder_stat_br(proc, thread, thread->return_error2);
            if (ptr == end)
                goto done;
            thread->return_error2 = BR_OK;
        }
        if (put_user(thread->return_error, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        binder_stat_br(proc, thread, thread->return_error);
        thread->return_error = BR_OK;
        goto done;
    }
 */


    thread->looper |= BINDER_LOOPER_STATE_WAITING;//set the waiting flag
    /*if (wait_for_proc_work)//not executed
        proc->ready_threads++;*/

    binder_unlock(__func__);


    /*if (wait_for_proc_work) {//wait_for_proc_work is false, not executed
        if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
                    BINDER_LOOPER_STATE_ENTERED))) {

            wait_event_interruptible(binder_user_error_wait,
                         binder_stop_on_user_error < 2);
        }
        binder_set_nice(proc->default_priority);
        if (non_block) {
            if (!binder_has_proc_work(proc, thread))
                ret = -EAGAIN;
        } else
            ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    } */
 else {//taken
        /*if (non_block) {//not executed
            if (!binder_has_thread_work(thread))
                ret = -EAGAIN;
        } */
  else
            ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));//no longer blocks: the reply is queued
    }


    binder_lock(__func__);

    /*if (wait_for_proc_work)
        proc->ready_threads--;*/
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    /*if (ret)
        return ret;*/

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        if (!list_empty(&thread->todo))
            w = list_first_entry(&thread->todo, struct binder_work, entry);//take the reply transaction off the queue
       /*
        else if (!list_empty(&proc->todo) && wait_for_proc_work)
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        else {
            if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) 
                goto retry;
            break;
        }*/

        /*if (end - ptr < sizeof(tr) + 4)
            break;*/

        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            t = container_of(w, struct binder_transaction, work);
        } break;
          
        

        
        }

        /*if (!t)
            continue;*/

        BUG_ON(t->buffer == NULL);
         /*
        if (t->buffer->target_node) {//target_node is NULL here, so this branch is skipped
            struct binder_node *target_node = t->buffer->target_node;
            tr.target.ptr = target_node->ptr;
            tr.cookie =  target_node->cookie;
            t->saved_priority = task_nice(current);//8
            if (t->priority < target_node->min_priority &&
                !(t->flags & TF_ONE_WAY))
                binder_set_nice(t->priority);
            else if (!(t->flags & TF_ONE_WAY) ||
                 t->saved_priority > target_node->min_priority)
                binder_set_nice(target_node->min_priority);
            cmd = BR_TRANSACTION;
        }*/ 
      else {
            tr.target.ptr = NULL;
            tr.cookie = NULL;
            cmd = BR_REPLY;//the command is BR_REPLY
        }
        tr.code = t->code;//0
        tr.flags = t->flags;//0
        tr.sender_euid = t->sender_euid;//servicemanager's uid

        if (t->from) {//the sender is servicemanager
            struct task_struct *sender = t->from->proc->tsk;//servicemanager's task
            tr.sender_pid = task_tgid_nr_ns(sender,
                            current->nsproxy->pid_ns);
        } else {
            tr.sender_pid = 0;
        }

        tr.data_size = t->buffer->data_size;//size of the data area
        tr.offsets_size = t->buffer->offsets_size;//size of the offsets area
        tr.data.ptr.buffer = (void *)t->buffer->data +
                    proc->user_buffer_offset;//kernel address plus the per-process offset gives the user-space address of the same buffer
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));

        if (put_user(cmd, (uint32_t __user *)ptr))//write the BR_REPLY command
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (copy_to_user(ptr, &tr, sizeof(tr)))//copy the descriptor to user space (the payload itself is not copied again)
            return -EFAULT;
        ptr += sizeof(tr);

        trace_binder_transaction_received(t);
        binder_stat_br(proc, thread, cmd);


        list_del(&t->work.entry);//unlink the work item
        t->buffer->allow_user_free = 1;
      /*
        if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
            t->to_parent = thread->transaction_stack;
            t->to_thread = thread;
            thread->transaction_stack = t;
        } */
      else {
            t->buffer->transaction = NULL;//detach the buffer from the transaction
            kfree(t);//free the transaction
            binder_stats_deleted(BINDER_STAT_TRANSACTION);
            }
        break;
    }

done:

    *consumed = ptr - buffer;//size of the reply data
 /*
    if (proc->requested_threads + proc->ready_threads == 0 &&
        proc->requested_threads_started < proc->max_threads &&
        (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
         BINDER_LOOPER_STATE_ENTERED))) {
        proc->requested_threads++;
        binder_debug(BINDER_DEBUG_THREADS,
                 "binder: %d:%d BR_SPAWN_LOOPER\n",
                 proc->pid, thread->pid);
        if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
            return -EFAULT;
        binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
    }
 */
    return 0;
 
}

2.8.2 binder_ioctl

Following the call stack back up, we return to binder_ioctl.

1. It copies the bwr descriptor back to user space; again only the descriptor is copied, not the payload, preserving binder's single data copy.

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;//fetch the binder_proc of the media service's process
    struct binder_thread *thread;//the media service's binder thread
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;//__user marks a user-space pointer

    trace_binder_ioctl(cmd, arg);

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//condition already true, returns immediately without sleeping
    if (ret)
        goto err_unlocked;

    binder_lock(__func__);
    thread = binder_get_thread(proc);//look up (or create) the binder_thread
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {//size check
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//copy the descriptor into the kernel
            ret = -EFAULT;
            goto err;
        }


        /*if (bwr.write_size > 0) {//write_size is 0, not executed
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
            trace_binder_write_done(ret);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
      */
        if (bwr.read_size > 0) {//read_size > 0
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
            //the media service's binder thread has now received the reply and continues
            /*
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
             */
        }

        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {//copy the updated descriptor (with the reply sizes) back to user space
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    




    }
    ret = 0;
err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
    wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret && ret != -ERESTARTSYS)
        printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
    trace_binder_ioctl_done(ret);
    return ret;
 
}

2.8.3 talkWithDriver

Next we return to talkWithDriver.

1. After reading from the driver, it records where the read data starts and how much there is.

status_t IPCThreadState::talkWithDriver(bool doReceive)//doReceive defaults to true
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();//needRead is true

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;//write_size is 0
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {//we want to read
        bwr.read_size = mIn.dataCapacity();//how much we are willing to read
        bwr.read_buffer = (uintptr_t)mIn.data();
    }
     /*else {//not executed
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }*/


    // not executed
    //if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)//read from the driver; the reply
        //message is received here
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);


    if (err >= NO_ERROR) {
       /* if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }*/
        if (bwr.read_consumed > 0) {//the read area now has data
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);//rewind to the start
        }

        return NO_ERROR;
    }
    return err;
}

2.8.4 waitForResponse

We now return to waitForResponse.

The read buffer holds two commands, BR_NOOP and BR_REPLY.

1. The first iteration pulls out BR_NOOP, which does nothing, so we skip over it.

2. The second iteration pulls out BR_REPLY.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;//there is data, so no continue

        cmd = (uint32_t)mIn.readInt32();//read BR_REPLY



        switch (cmd) {
 

 

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {//taken
                    if ((tr.flags & TF_STATUS_CODE) == 0) {//flags is 0, so this branch is taken
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                            
                    } 
                    /*
                    else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(nullptr,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }*/
                } 
                /*else {
                    freeBuffer(nullptr,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }*/
            }
            goto finish;

        
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

2.8.5 Parcel::ipcSetDataReference

1. What is received here is the reply (0); the main job of this function is to populate the reply Parcel with that data.

The function then returns to IPCThreadState::transact, then to BpBinder::transact, then to the addService call in IServiceManager, and finally to MediaPlayerService::instantiate(). None of them does anything further with this reply. The reason is that the mediaservice server side has no use for the reply to its registration with servicemanager; a reply normally matters when a media client calls the mediaservice server and needs to receive a handle back.
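For context, this is roughly how the reply is (barely) consumed one level up. The sketch below is modeled on AOSP's BpServiceManager::addService (the exact parameter list varies across Android versions, so treat it as illustrative rather than authoritative): the only thing ever read back out of the reply Parcel is the status/exception word.

// Sketch of the caller's side, modeled on AOSP BpServiceManager::addService.
virtual status_t addService(const String16& name, const sp<IBinder>& service,
                            bool allowIsolated, int dumpsysPriority) {
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    data.writeInt32(allowIsolated ? 1 : 0);
    data.writeInt32(dumpsysPriority);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    // The reply filled in by ipcSetDataReference is consulted only for the
    // status word; the 0 written by svcmgr_handler means success.
    return err == NO_ERROR ? reply.readExceptionCode() : err;
}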

void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
    const binder_size_t* objects, size_t objectsCount, release_func relFunc, void* relCookie)
    // Arguments:
    // data points to a buffer holding a single value, 0
    // dataSize is 4 bytes
    // objects is null
    // objectsCount is 0
    // relFunc is the freeBuffer function
    // relCookie is this
{
    binder_size_t minOffset = 0;
    freeDataNoInit();
    mError = NO_ERROR;
    mData = const_cast<uint8_t*>(data); // stores the value 0
    mDataSize = mDataCapacity = dataSize;
    //ALOGI("setDataReference Setting data size of %p to %lu (pid=%d)", this, mDataSize, getpid());
    mDataPos = 0;
    ALOGV("setDataReference Setting data pos of %p to %zu", this, mDataPos);
    mObjects = const_cast<binder_size_t*>(objects); // null
    mObjectsSize = mObjectsCapacity = objectsCount; // 0
    mNextObjectHint = 0;
    mObjectsSorted = false;
    mOwner = relFunc;
    mOwnerCookie = relCookie;
    /* mObjectsSize is 0, so this loop does not run
    for (size_t i = 0; i < mObjectsSize; i++) {
        binder_size_t offset = mObjects[i];
        if (offset < minOffset) {
            ALOGE("%s: bad object offset %" PRIu64 " < %" PRIu64 "\n",
                  __func__, (uint64_t)offset, (uint64_t)minOffset);
            mObjectsSize = 0;
            break;
        }
        minOffset = offset + sizeof(flat_binder_object);
    }
    */
    scanForFds();
}
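Because mOwner is set to freeBuffer, the kernel-side buffer backing this reply is only released when the Parcel is destroyed. For reference, a trimmed sketch of IPCThreadState::freeBuffer as it appears in AOSP (signatures differ slightly across versions): it simply queues a BC_FREE_BUFFER command that rides along with the next talkWithDriver() call.

// Trimmed sketch of IPCThreadState::freeBuffer (AOSP, version-dependent).
void IPCThreadState::freeBuffer(Parcel* parcel, const uint8_t* data,
                                size_t /*dataSize*/,
                                const binder_size_t* /*objects*/,
                                size_t /*objectsSize*/, void* /*cookie*/)
{
    ALOGV("Writing BC_FREE_BUFFER for %p\n", data);
    if (parcel != nullptr) parcel->closeFileDescriptors();
    IPCThreadState* state = self();
    state->mOut.writeInt32(BC_FREE_BUFFER);    // tell the driver the buffer can be reclaimed
    state->mOut.writePointer((uintptr_t)data); // identify the buffer by its address
}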

2.9 ProcessState::startThreadPool

Its main job:

1. Create a new thread, tell the driver that this thread has entered the loop, and have the thread loop forever receiving messages from the driver.

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) { // defaults to false
        mThreadPoolStarted = true; // set to true: the pool has started, do not start it again
        spawnPooledThread(true); // spawn the pool's main thread
    }
}

2.9.1 ProcessState::spawnPooledThread

1. Creates a PoolThread object, which is essentially a new thread. After run() is called the thread invokes threadLoop(); as long as threadLoop() returns true and requestExit() has not been called, threadLoop() keeps being called in a loop.

Thread mechanics are covered in a separate article.

void ProcessState::spawnPooledThread(bool isMain)
// isMain is true
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName(); // build the binder thread name from the pid
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain); // isMain passed in is true
        t->run(name.string());
    }
}
String8 ProcessState::makeBinderThreadName() {
    int32_t s = android_atomic_add(1, &mThreadPoolSeq); // atomic +1; mThreadPoolSeq becomes 2, s = 1
    pid_t pid = getpid(); // current process pid
    String8 name;
    name.appendFormat("Binder:%d_%X", pid, s); // name derived from the pid
    return name;
}
class PoolThread : public Thread // inherits from Thread; see the separate Thread article
{
public:
    explicit PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }
    
protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }
    
    const bool mIsMain;
};
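To make the loop semantics concrete, here is a minimal, hypothetical Thread subclass (DemoThread is not AOSP code): returning true from threadLoop() makes the framework call it again, while PoolThread returns false because joinThreadPool() already loops internally and only returns when the thread is leaving the pool.

#include <utils/Thread.h>
#include <utils/StrongPointer.h>
using namespace android;

// Hypothetical demo illustrating Thread::threadLoop() semantics.
class DemoThread : public Thread {
protected:
    virtual bool threadLoop() {
        // ... one unit of work per call ...
        return true; // true: call threadLoop() again (until requestExit() is called)
    }
};

// Usage:
//   sp<DemoThread> t = new DemoThread();
//   t->run("demo-thread");
// The pool thread pattern returns false instead, because its body
// (joinThreadPool) blocks and loops by itself.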

2.9.2 threadLoop

// See the separate article on Android's Thread class. What matters here is that threadLoop() gets executed.
// After run() is called the thread invokes threadLoop(); as long as it returns true and requestExit() has not been called, it keeps being called in a loop.
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

2.9.3 IPCThreadState::joinThreadPool

1. Sends the BC_ENTER_LOOPER command to the driver.

2. In an endless loop, keeps receiving requests from other clients of the mediaservice service.

void IPCThreadState::joinThreadPool(bool isMain) // isMain is true
{
    
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER); // write BC_ENTER_LOOPER into mOut
    // With isMain=true the command is BC_ENTER_LOOPER: a binder main thread that never exits.
    // With isMain=false the command is BC_REGISTER_LOOPER: a thread created at the binder driver's request.

    status_t result;
    do {
        processPendingDerefs(); // handle pending reference-count releases
        
        result = getAndExecuteCommand(); // wait for requests from other clients of the mediaservice service


        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

    
        if(result == TIMED_OUT && !isMain) { // on timeout, non-main threads leave the loop
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF); // endless loop waiting for messages

    LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%d\n",
        (void*)pthread_self(), getpid(), result);

    mOut.writeInt32(BC_EXIT_LOOPER); // once out of the loop, tell the driver this thread is leaving it
    talkWithDriver(false);
}
void IPCThreadState::processPendingDerefs()
{
    if (mIn.dataPosition() >= mIn.dataSize()) { // only when mIn has no pending data
       
        while (mPendingWeakDerefs.size() > 0 || mPendingStrongDerefs.size() > 0) {
        // mPendingWeakDerefs holds weak references waiting to be released; mIn must be
        // drained first so that releasing them is safe.
        // mPendingStrongDerefs holds BBinder entities.
            while (mPendingWeakDerefs.size() > 0) {
                RefBase::weakref_type* refs = mPendingWeakDerefs[0];
                mPendingWeakDerefs.removeAt(0);
                refs->decWeak(mProcess.get()); // decrement the handle's weak reference
            }

            if (mPendingStrongDerefs.size() > 0) {

                BBinder* obj = mPendingStrongDerefs[0]; // decrement the BBinder entity's strong
                // reference; entities are normally stored only within the same process.
                mPendingStrongDerefs.removeAt(0);
                obj->decStrong(mProcess.get());
            }
        }
    }
}

2.9.4 IPCThreadState::getAndExecuteCommand

1. At this point its job is to send the BC_ENTER_LOOPER message to the driver.

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver(); // first send BC_ENTER_LOOPER to the driver, registering this thread
    // into the binder thread pool; then block in read, waiting for a message
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail(); // if there is a message
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32(); // read the cmd
        


        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++; // executing thread count +1
        if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs == 0) { // the pool is saturated; record when starvation began
            mProcess->mStarvationStartTimeMs = uptimeMillis();
        }
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        result = executeCommand(cmd); // execute the fetched command

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--; // executing thread count -1
        if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs != 0) {
            int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
            if (starvationTimeMs > 100) {
                ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
                      mProcess->mMaxThreads, starvationTimeMs);
            }
            mProcess->mStarvationStartTimeMs = 0;
        }
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
    }

    return result;
}

2.9.5 IPCThreadState::talkWithDriver

status_t IPCThreadState::talkWithDriver(bool doReceive) // doReceive defaults to true
{
	/**
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }
	*/

    binder_write_read bwr;
	//struct binder_write_read {
	//binder_size_t		write_size;     // total bytes in write_buffer to be written
	//binder_size_t		write_consumed; // bytes of write_buffer the driver has consumed
	//binder_uintptr_t	write_buffer;   // pointer to the write buffer
	//binder_size_t		read_size;      // total bytes in read_buffer to be read
	//binder_size_t		read_consumed;  // bytes of read_buffer the driver has consumed
	//binder_uintptr_t	read_buffer;    // pointer to the read buffer
	//};

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize(); // needRead is true here,
	// because mIn.dataPosition() equals mIn.dataSize()


	// We do not write anything while we are still reading from mIn, or when the caller
	// intends to read the next piece of data (doReceive is true).
	// Here outAvail equals mOut.dataSize().
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail; // write_size is mOut.dataSize()
    bwr.write_buffer = (uintptr_t)mOut.data();

	
    if (doReceive && needRead) { // when we need to read from the driver
        bwr.read_size = mIn.dataCapacity(); // set the size, 256 bytes
        bwr.read_buffer = (uintptr_t)mIn.data();
    }
	/*
	else { // when not reading, set the read size and buffer to 0
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }*/
	

    // If both buffers are empty there is nothing to do; return immediately.
    // Here write_size holds data, so this does not trigger.
	/**
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
	*/

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) // write to the binder driver;
        // we are registering this thread as a binder main looper thread, so mOut has data
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
		/**
        if (mProcess->mDriverFD < 0) {
            err = -EBADF;
        }
		*/
    } while (err == -EINTR);


    if (err >= NO_ERROR) { // the driver received the message
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize()) // if the driver consumed less than mOut
            // holds, it failed to consume the write buffer
                LOG_ALWAYS_FATAL("Driver did not consume write buffer. "
                                 "err: %s consumed: %zu of %zu",
                                 statusToString(err).c_str(),
                                 (size_t)bwr.write_consumed,
                                 mOut.dataSize());
            else { // mOut was fully consumed
                mOut.setDataSize(0); // reset the data size to 0
                processPostWriteDerefs(); // drop the references taken for the write
            }
        }
		/**
        if (bwr.read_consumed > 0) { // if the driver produced read data
            mIn.setDataSize(bwr.read_consumed); // set mIn's size
            mIn.setDataPosition(0); // set mIn's start position
        }
		*/
        return NO_ERROR;
    }

    return err;
}
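Stripped of the Parcel bookkeeping, the write half of this call boils down to one ioctl. Below is a minimal, hypothetical user-space sketch (enter_looper is an invented name; the uapi header path varies by kernel, linux/android/binder.h on recent trees) of what the driver receives at this step:

#include <errno.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/android/binder.h> // binder_write_read, BC_ENTER_LOOPER, BINDER_WRITE_READ

// Hypothetical helper: send a single BC_ENTER_LOOPER word, nothing to read.
int enter_looper(int binder_fd) {
    uint32_t cmd = BC_ENTER_LOOPER;
    struct binder_write_read bwr;
    memset(&bwr, 0, sizeof(bwr));
    bwr.write_buffer = (uintptr_t)&cmd; // write buffer holds one 4-byte command
    bwr.write_size   = sizeof(cmd);
    bwr.read_size    = 0;               // pure write: binder_thread_read is skipped
    if (ioctl(binder_fd, BINDER_WRITE_READ, &bwr) < 0)
        return -errno;
    return 0;                           // bwr.write_consumed is now 4
}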

2.9.6 binder_ioctl

// The payload is BC_ENTER_LOOPER and cmd is BINDER_WRITE_READ
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	int ret;
	struct binder_proc *proc = filp->private_data; // fetch the proc object for this fd
	struct binder_thread *thread; // the binder thread of the mediaservice process
	unsigned int size = _IOC_SIZE(cmd);
	void __user *ubuf = (void __user *)arg; // this is the bwr

	trace_binder_ioctl(cmd, arg);

	ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2); // does not sleep
	if (ret)
		goto err_unlocked;

	binder_lock(__func__);
	thread = binder_get_thread(proc); // get this proc's binder_thread
	if (thread == NULL) {
		ret = -ENOMEM;
		goto err;
	}

	switch (cmd) {
	case BINDER_WRITE_READ: {
		struct binder_write_read bwr;
		if (size != sizeof(struct binder_write_read)) { // sanity-check the size
			ret = -EINVAL;
			goto err;
		}
		if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { // copy the data from user space to kernel space
		// first argument to: destination pointer in kernel space
		// second argument from: source pointer in user space
		// third argument n: length of the data
			ret = -EFAULT;
			goto err;
		}

		if (bwr.write_size > 0) { // the write buffer holds data, so perform the write
			ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
			// Arguments:
			// proc is the mediaservice process's proc
			// thread is this process's binder thread
			// bwr.write_buffer is the start address of the data
			// write_size is the data size
			// write_consumed is how much the driver has consumed
			trace_binder_write_done(ret);
			/**
			if (ret < 0) { // on a failed write, copy bwr back out to ubuf
				bwr.read_consumed = 0;
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					// first argument: user-space pointer
					// second argument: kernel-space pointer
					// n is how many bytes to copy from kernel to user space
					ret = -EFAULT;
				goto err;
			}
			*/
		}
		/**
		if (bwr.read_size > 0) { // the read path of this flow is walked separately in 2.9.11 and 2.9.12
			ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
			trace_binder_read_done(ret);
			if (!list_empty(&proc->todo))
				wake_up_interruptible(&proc->wait);
			if (ret < 0) {
				if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
					ret = -EFAULT;
				goto err;
			}
		}
		*/
		if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { // copy the kernel-space data back to ubuf;
		// write_consumed has now become 4 bytes
			ret = -EFAULT;
			goto err;
		}
		break;
	}
	}
	ret = 0;
err:
	if (thread)
		thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
	binder_unlock(__func__);
	wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2); // does not sleep
	if (ret && ret != -ERESTARTSYS)
		printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
	trace_binder_ioctl_done(ret);
	return ret;
}

2.9.7 binder_thread_write

1. On the driver side, mark the current thread as having entered the loop, i.e. ready to receive driver messages in a loop.

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
			void __user *buffer, int size, signed long *consumed)
			// Arguments:
			// proc is the mediaservice process's proc
			// thread is this process's binder thread
			// buffer is bwr.write_buffer, the start of the data, which is BC_ENTER_LOOPER
			// size is 4 bytes, the data size
			// consumed = 0, how much the driver has consumed
{
	uint32_t cmd;
	void __user *ptr = buffer + *consumed; // start address + 0: still the start of the write buffer
	void __user *end = buffer + size; // end address of the buffer

	while (ptr < end && thread->return_error == BR_OK) {
		if (get_user(cmd, (uint32_t __user *)ptr)) // fetch the command from the write buffer: BC_ENTER_LOOPER
			return -EFAULT;
		ptr += sizeof(uint32_t); // skip past BC_ENTER_LOOPER; the buffer could hold more data, but here it is now empty
		trace_binder_command(cmd);
		if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) { // bookkeeping
			binder_stats.bc[_IOC_NR(cmd)]++;
			proc->stats.bc[_IOC_NR(cmd)]++;
			thread->stats.bc[_IOC_NR(cmd)]++;
		}
		switch (cmd) {
		case BC_ENTER_LOOPER:
		/**
			if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) { // error if this looper already registered
				thread->looper |= BINDER_LOOPER_STATE_INVALID;
				binder_user_error("binder: %d:%d ERROR:"
					" BC_ENTER_LOOPER called after "
					"BC_REGISTER_LOOPER\n",
					proc->pid, thread->pid);
			}
			*/
			thread->looper |= BINDER_LOOPER_STATE_ENTERED; // mark this binder thread as registered
			break;
		}
		*consumed = ptr - buffer; // bytes consumed; 4 bytes here
	}
	return 0;
}
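The looper field manipulated here is just a bitmask. A self-contained sketch of how it evolves over this flow (the flag values below are copied from the legacy driver's enum; treat them as illustrative):

#include <cstdio>

// Flag values as defined in the legacy binder driver (illustrative copy).
enum {
    BINDER_LOOPER_STATE_REGISTERED  = 0x01, // set on BC_REGISTER_LOOPER
    BINDER_LOOPER_STATE_ENTERED     = 0x02, // set on BC_ENTER_LOOPER
    BINDER_LOOPER_STATE_EXITED      = 0x04, // set on BC_EXIT_LOOPER
    BINDER_LOOPER_STATE_INVALID     = 0x08, // set on protocol misuse
    BINDER_LOOPER_STATE_WAITING     = 0x10, // set while sleeping in binder_thread_read
    BINDER_LOOPER_STATE_NEED_RETURN = 0x20, // cleared at the end of every ioctl
};

int main() {
    unsigned looper = BINDER_LOOPER_STATE_NEED_RETURN; // a fresh binder_thread starts like this
    looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;        // cleared when the first ioctl returns
    looper |= BINDER_LOOPER_STATE_ENTERED;             // BC_ENTER_LOOPER handled (this section)
    looper |= BINDER_LOOPER_STATE_WAITING;             // about to sleep in binder_thread_read
    printf("looper = 0x%x\n", looper);                 // prints looper = 0x12
    return 0;
}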

2.9.8 An endless loop receiving messages from the driver

1. After the BC_ENTER_LOOPER message has been sent to the driver, execution proceeds to the endless loop in IPCThreadState::joinThreadPool.

2. getAndExecuteCommand calls talkWithDriver (seen many times above) and then blocks, waiting for a message to arrive.

void IPCThreadState::joinThreadPool(bool isMain) // isMain is true
{
    status_t result;
    do {
        processPendingDerefs(); // handle pending reference-count releases
        
        result = getAndExecuteCommand(); // wait for requests from other clients of the mediaservice service


        if (result < NO_ERROR && result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) {
            ALOGE("getAndExecuteCommand(fd=%d) returned unexpected error %d, aborting",
                  mProcess->mDriverFD, result);
            abort();
        }

    
        if(result == TIMED_OUT && !isMain) { // on timeout, non-main threads leave the loop
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF); // endless loop waiting for messages
}

2.9.9 getAndExecuteCommand

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver(); // this time it blocks in read, waiting for a message
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail(); // if there is a message
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32(); // read the cmd
        


        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++; // executing thread count +1
        if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs == 0) { // the pool is saturated; record when starvation began
            mProcess->mStarvationStartTimeMs = uptimeMillis();
        }
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        result = executeCommand(cmd); // execute the fetched command

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--; // executing thread count -1
        if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs != 0) {
            int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
            if (starvationTimeMs > 100) {
                ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
                      mProcess->mMaxThreads, starvationTimeMs);
            }
            mProcess->mStarvationStartTimeMs = 0;
        }
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
    }

    return result;
}

2.9.10 talkWithDriver

1. Here the thread dedicated to IPC blocks, waiting for a message from the driver.

status_t IPCThreadState::talkWithDriver(bool doReceive) // doReceive defaults to true
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    const bool needRead = mIn.dataPosition() >= mIn.dataSize(); // needRead is true here

    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0; // outAvail is 0 here

    bwr.write_size = outAvail; // write_size is 0
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) { // we need to read data
        bwr.read_size = mIn.dataCapacity(); // set the size to read
        bwr.read_buffer = (uintptr_t)mIn.data();
    }
     /*else { // not executed
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }*/



    // Not executed: read_size is non-zero
    //if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {

#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) // read a message from the driver;
        // the thread goes to sleep inside this call
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) { // write_size was 0, so nothing was consumed this time
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) { // once woken up with data, mIn is refilled
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }

    return err;
}

2.9.11 binder_ioctl

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data; // fetch the mediaservice process's proc object
    struct binder_thread *thread; // the mediaservice process's binder thread
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg; // __user marks a user-space pointer

    trace_binder_ioctl(cmd, arg);

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2); // condition holds, returns at once without sleeping
    if (ret)
        goto err_unlocked;

    binder_lock(__func__);
    thread = binder_get_thread(proc); // get the binder_thread
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ: {
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) { // check that the size matches
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { // copy the data into kernel space
            ret = -EFAULT;
            goto err;
        }


        /*if (bwr.write_size > 0) { // write_size is 0 this time, not executed
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
            trace_binder_write_done(ret);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
      */
        if (bwr.read_size > 0) { // read_size > 0 this time
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
            // The mediaservice binder thread blocks inside this call; everything
            // below runs only after the thread is woken up.
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait);
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        binder_debug(BINDER_DEBUG_READ_WRITE,
                 "binder: %d:%d wrote %ld of %ld, read return %ld of %ld\n",
                 proc->pid, thread->pid, bwr.write_consumed, bwr.write_size,
                 bwr.read_consumed, bwr.read_size);
        if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        break;
    }
    }
    ret = 0;
err:
    if (thread)
        thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
    binder_unlock(__func__);
    wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret && ret != -ERESTARTSYS)
        printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
    trace_binder_ioctl_done(ret);
    return ret;
}

2.9.12 binder_thread_read

static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  void  __user *buffer, int size,
                  signed long *consumed, int non_block)
         // Arguments:
         // proc is the mediaservice process's proc
         // thread is the newly spawned binder thread of the mediaservice process
         // buffer points to read_buffer, the start of the read area
         // size is read_size, which is > 0
         // consumed (read_consumed) is 0
         // non_block is 0, i.e. blocking
{
    void __user *ptr = buffer + *consumed; // points to the start of the buffer
    void __user *end = buffer + size; // points to the end

    int ret = 0;
    int wait_for_proc_work;

    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr)) // place a BR_NOOP command into the buffer
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }

retry:
    wait_for_proc_work = thread->transaction_stack == NULL &&
                list_empty(&thread->todo); // this freshly spawned thread has no transaction
                // stack and an empty todo list, so wait_for_proc_work is true



    thread->looper |= BINDER_LOOPER_STATE_WAITING; // set the waiting flag
    if (wait_for_proc_work)
        proc->ready_threads++; // one more idle thread ready to serve this process

    binder_unlock(__func__);


    if (wait_for_proc_work) { // taken: the thread waits for work addressed to the whole process
        if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
                    BINDER_LOOPER_STATE_ENTERED))) { // not taken: BC_ENTER_LOOPER already set ENTERED

            wait_event_interruptible(binder_user_error_wait,
                         binder_stop_on_user_error < 2);
        }
        binder_set_nice(proc->default_priority);
        /*if (non_block) { // not executed, the fd is blocking
            if (!binder_has_proc_work(proc, thread))
                ret = -EAGAIN;
        } else*/
            ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread)); // the mediaservice binder thread sleeps here
    }
    /* else { // not executed
        if (non_block) {
            if (!binder_has_thread_work(thread))
                ret = -EAGAIN;
        } else
            ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
    } */
// nothing below runs until the thread is woken up, so it is omitted .....
}
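What wakes a sleeping binder thread is its wait condition. For reference, the two predicates as they appear in the same legacy driver (quoted for completeness; verify against your exact kernel tree):

static int binder_has_proc_work(struct binder_proc *proc,
				struct binder_thread *thread)
{
	return !list_empty(&proc->todo) ||
		(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
}

static int binder_has_thread_work(struct binder_thread *thread)
{
	return !list_empty(&thread->todo) || thread->return_error != BR_OK ||
		(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
}

So the pooled thread wakes as soon as a transaction lands on proc->todo, i.e. when any client sends a request to the mediaservice process.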

2.10 joinThreadPool

The full flow here is identical to 2.9.3 through 2.9.12.

So why is this step executed at all?

Answer: startThreadPool only spawned another thread, which runs asynchronously; the current thread is mediaplayerservice's main thread. If the main thread running main() returned, all of its child threads would be torn down with it.

So the mediaplayerservice main thread must not exit. It is therefore blocked as well and added to the binder thread pool. The process thus has two binder threads, both serving messages from the driver, which improves binder throughput, and main() never returns. A sketch of the resulting main() layout follows.
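For reference, a trimmed sketch of mediaserver's main() (see frameworks/av/media/mediaserver/main_mediaserver.cpp; details vary across Android versions) showing the ordering that sections 2.9 and 2.10 walk through:

int main(int argc __unused, char **argv __unused)
{
    sp<ProcessState> proc(ProcessState::self());      // open /dev/binder, mmap
    sp<IServiceManager> sm = defaultServiceManager(); // proxy for handle 0
    MediaPlayerService::instantiate();                // addService (sections 2.3 - 2.8)
    ProcessState::self()->startThreadPool();          // spawn Binder:<pid>_1 (section 2.9)
    IPCThreadState::self()->joinThreadPool();         // main thread joins too (section 2.10)
    return 0;                                         // never reached in practice
}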

2.10.1 IPCThreadState::joinThreadPool

Identical to 2.9.3: the main thread writes BC_ENTER_LOOPER into mOut and enters the same endless loop.

2.10.2 IPCThreadState::getAndExecuteCommand

Identical to 2.9.4: talkWithDriver first delivers BC_ENTER_LOOPER to the driver.

2.10.3 IPCThreadState::talkWithDriver

Identical to 2.9.5: a BINDER_WRITE_READ ioctl whose write buffer carries BC_ENTER_LOOPER.

2.10.4 binder_ioctl

Identical to 2.9.6: the driver copies the bwr in and dispatches to binder_thread_write.

2.10.5 binder_thread_write

Identical to 2.9.7: the driver sets BINDER_LOOPER_STATE_ENTERED on the main thread's looper state.

2.10.6 An endless loop receiving messages from the driver

Identical to 2.9.8: after BC_ENTER_LOOPER is sent, execution reaches the endless loop in joinThreadPool.

2.10.7 getAndExecuteCommand

Identical to 2.9.9: talkWithDriver is called again and this time blocks in the driver's read path.

2.10.8 talkWithDriver

Identical to 2.9.10: write_size is 0 and read_size is mIn.dataCapacity(), so the ioctl is a pure read.

2.10.9 binder_ioctl

Identical to 2.9.11: the driver enters binder_thread_read with the main thread's binder_thread.

2.10.10 binder_thread_read

Identical to 2.9.12: the main thread goes to sleep waiting for work, now serving as the second binder thread of the mediaplayerservice process.

This completes the analysis of the entire service-registration flow.
