Understanding Android Binder Deep in the Kernel, Part 2
- Preface
- I. Overview of the Binder communication kernel source
- 1. Overview of how a client sends data to a server
- 1.1 binder_ref
- 1.2 binder_node
- 1.3 binder_proc
- 1.4 binder_thread
- 2. When is the server's binder_node created?
- 2.1 The Binder driver creates a binder_node for the service
- 2.2 service_manager references the binder_node
- 2.3 The client queries service_manager for a service
- 2.4 The client sends data to the service via the handle it obtained
- 3. Overview of the Binder data-transfer process
- 3.1 The client sends data to the server: write first, then read
- 3.2 The server fetches the client's data: read first, then write
- 3.3 How is data copied between kernel space and user space?
- 1. The usual approach: two copies
- 2. The Binder approach: one copy
- II. Kernel source walkthrough of service registration
- 1. Recap of the user-space service-registration code
- 2. Sending data to the kernel Binder driver via ioctl: source analysis
- 1. A summary of the data the server sends to the kernel Binder driver
- 2. The server's ioctl call lands in binder_ioctl in the kernel Binder driver, [view the source](https://github.com/torvalds/linux/blob/master/drivers/android/binder.c#L5692)
- 2.1 binder_get_thread: fetch or create the server process's binder_thread
- 2.2 binder_ioctl_write_read handles the server's data
- 2.3 Analyzing binder_thread_write
- 2.4 Analyzing binder_transaction
- 2.5 When is service_manager's binder_node created?
- 2.6 When is service_manager's binder_proc created?
- 2.7 Continuing binder_transaction from section 2.4
- 2.8 binder_transaction, continued
- 2.9 Analyzing binder_translate_binder
- 2.10 binder_transaction, continued
- 3. service_manager wakes up: analyzing the service_manager source
- 3.1 service_manager waits for data
- 3.2 binder_thread_read reads the data
- 3.3 Layout of the data read after service_manager wakes up
- 3.4 Continuing with what service_manager does after reading the client's data
- 4. A brief summary diagram of the service-registration process
Preface
In the previous article, Understanding Android Binder Deep in the Kernel, Part 1, we showed how to write a Binder cross-process communication application in C and analyzed that demo's source in detail, but we skipped the code related to the kernel Binder driver. This article dives into the Linux kernel and explains the Binder driver source. After reading it you will understand, at the kernel level, how Binder cross-process communication actually works; a solid grasp of the Binder driver will also lay a firm foundation for reading the Android source code.
I originally intended to cover all of the Binder kernel source in one article, but explaining it thoroughly made the text very detailed, and by the time the service-registration path was covered the article was already very long. For readability, this article therefore covers only the kernel source for service registration; the next article will cover the kernel source for service lookup and service use.
I. Overview of the Binder communication kernel source
Let's first get a rough picture of how the Binder kernel driver works, so that the source analysis below has some direction.
1. Overview of how a client sends data to a server
Scenario
Client process: A
Server process: B
A obtains a service by acquiring a handle that refers to the service provided by B; a process's reference to a service is represented by an integer, desc, inside a binder_ref.
When process A wants to send data to process B, it calls ioctl with the handle. From the handle the driver finds the service's reference, a binder_ref; from the binder_ref it finds the service's binder_node; from the binder_node it finds the serving process's binder_proc; and the binder_proc's threads red-black tree holds the binder_thread instances serving different clients' requests. The flow is as follows:
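As a rough illustration, here is a toy, user-space model of that lookup chain (the demo structs and names are made up and only mirror the pointers involved; a plain array stands in for the refs_by_desc red-black tree):
#include <stdint.h>
#include <stdio.h>

/* Stand-in structs that model only the chain, not the real kernel layouts. */
struct binder_proc_demo { int pid; };
struct binder_node_demo { struct binder_proc_demo *proc; };
struct binder_ref_demo  { uint32_t desc; struct binder_node_demo *node; };

/* handle -> binder_ref -> binder_node -> binder_proc */
static struct binder_proc_demo *resolve(struct binder_ref_demo *refs,
                                        int nrefs, uint32_t handle)
{
    for (int i = 0; i < nrefs; i++)      /* stands in for the rbtree lookup */
        if (refs[i].desc == handle)
            return refs[i].node->proc;   /* ref -> node -> serving process */
    return NULL;
}

int main(void)
{
    struct binder_proc_demo server = { .pid = 42 };
    struct binder_node_demo node   = { .proc = &server };
    struct binder_ref_demo  refs[] = { { .desc = 1, .node = &node } };
    /* in the real driver, a binder_thread is then picked from proc->threads */
    printf("handle 1 -> server pid %d\n", resolve(refs, 1, 1)->pid);
    return 0;
}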
The flow above involves several important Binder driver structures. They are covered in detail later; for now, here are their definitions (see the Linux source for the originals).
Note: all of these structures are allocated in kernel space.
1.1 binder_ref
View the binder_ref source
binder_ref holds a pointer to a binder_node; it is the starting point for locating a service.
/**
* struct binder_ref - struct to track references on nodes
* @data: binder_ref_data containing id, handle, and current refcounts
* @rb_node_desc: node for lookup by @data.desc in proc's rb_tree
* @rb_node_node: node for lookup by @node in proc's rb_tree
* @node_entry: list entry for node->refs list in target node
* (protected by @node->lock)
* @proc: binder_proc containing ref
* @node: binder_node of target node. When cleaning up a
* ref for deletion in binder_cleanup_ref, a non-NULL
* @node indicates the node must be freed
* @death: pointer to death notification (ref_death) if requested
* (protected by @node->lock)
*
* Structure to track references from procA to target node (on procB). This
* structure is unsafe to access without holding @proc->outer_lock.
*/
struct binder_ref {
/* Lookups needed: */
/* node + proc => ref (transaction) */
/* desc + proc => ref (transaction, inc/dec ref) */
/* node => refs + procs (proc exit) */
struct binder_ref_data data;
struct rb_node rb_node_desc;
struct rb_node rb_node_node;
struct hlist_node node_entry;
struct binder_proc *proc;
struct binder_node *node;
struct binder_ref_death *death;
};
1.2 binder_node
View the binder_node source
binder_node represents a service provided by a process. Each time a server process registers a service with service_manager, one of these objects is created in the Binder driver.
/**
* struct binder_node - binder node bookkeeping
* @debug_id: unique ID for debugging
* (invariant after initialized)
* @lock: lock for node fields
* @work: worklist element for node work
* (protected by @proc->inner_lock)
* @rb_node: element for proc->nodes tree
* (protected by @proc->inner_lock)
* @dead_node: element for binder_dead_nodes list
* (protected by binder_dead_nodes_lock)
* @proc: binder_proc that owns this node
* (invariant after initialized)
* @refs: list of references on this node
* (protected by @lock)
* @internal_strong_refs: used to take strong references when
* initiating a transaction
* (protected by @proc->inner_lock if @proc
* and by @lock)
* @local_weak_refs: weak user refs from local process
* (protected by @proc->inner_lock if @proc
* and by @lock)
* @local_strong_refs: strong user refs from local process
* (protected by @proc->inner_lock if @proc
* and by @lock)
* @tmp_refs: temporary kernel refs
* (protected by @proc->inner_lock while @proc
* is valid, and by binder_dead_nodes_lock
* if @proc is NULL. During inc/dec and node release
* it is also protected by @lock to provide safety
* as the node dies and @proc becomes NULL)
* @ptr: userspace pointer for node
* (invariant, no lock needed)
* @cookie: userspace cookie for node
* (invariant, no lock needed)
* @has_strong_ref: userspace notified of strong ref
* (protected by @proc->inner_lock if @proc
* and by @lock)
* @pending_strong_ref: userspace has acked notification of strong ref
* (protected by @proc->inner_lock if @proc
* and by @lock)
* @has_weak_ref: userspace notified of weak ref
* (protected by @proc->inner_lock if @proc
* and by @lock)
* @pending_weak_ref: userspace has acked notification of weak ref
* (protected by @proc->inner_lock if @proc
* and by @lock)
* @has_async_transaction: async transaction to node in progress
* (protected by @lock)
* @accept_fds: file descriptor operations supported for node
* (invariant after initialized)
* @min_priority: minimum scheduling priority
* (invariant after initialized)
* @txn_security_ctx: require sender's security context
* (invariant after initialized)
* @async_todo: list of async work items
* (protected by @proc->inner_lock)
*
* Bookkeeping structure for binder nodes.
*/
struct binder_node {
int debug_id;
spinlock_t lock;
struct binder_work work;
union {
struct rb_node rb_node;
struct hlist_node dead_node;
};
struct binder_proc *proc;
struct hlist_head refs;
int internal_strong_refs;
int local_weak_refs;
int local_strong_refs;
int tmp_refs;
binder_uintptr_t ptr;
binder_uintptr_t cookie;
struct {
/*
* bitfield elements protected by
* proc inner_lock
*/
u8 has_strong_ref:1;
u8 pending_strong_ref:1;
u8 has_weak_ref:1;
u8 pending_weak_ref:1;
};
struct {
/*
* invariant after initialization
*/
u8 accept_fds:1;
u8 txn_security_ctx:1;
u8 min_priority;
};
bool has_async_transaction;
struct list_head async_todo;
};
1.3 binder_proc
View the binder_proc source
binder_proc represents the process that provides the service.
/**
* struct binder_proc - binder process bookkeeping
* @proc_node: element for binder_procs list
* @threads: rbtree of binder_threads in this proc
* (protected by @inner_lock)
* @nodes: rbtree of binder nodes associated with
* this proc ordered by node->ptr
* (protected by @inner_lock)
* @refs_by_desc: rbtree of refs ordered by ref->desc
* (protected by @outer_lock)
* @refs_by_node: rbtree of refs ordered by ref->node
* (protected by @outer_lock)
* @waiting_threads: threads currently waiting for proc work
* (protected by @inner_lock)
* @pid PID of group_leader of process
* (invariant after initialized)
* @tsk task_struct for group_leader of process
* (invariant after initialized)
* @cred struct cred associated with the `struct file`
* in binder_open()
* (invariant after initialized)
* @deferred_work_node: element for binder_deferred_list
* (protected by binder_deferred_lock)
* @deferred_work: bitmap of deferred work to perform
* (protected by binder_deferred_lock)
* @outstanding_txns: number of transactions to be transmitted before
* processes in freeze_wait are woken up
* (protected by @inner_lock)
* @is_dead: process is dead and awaiting free
* when outstanding transactions are cleaned up
* (protected by @inner_lock)
* @is_frozen: process is frozen and unable to service
* binder transactions
* (protected by @inner_lock)
* @sync_recv: process received sync transactions since last frozen
* bit 0: received sync transaction after being frozen
* bit 1: new pending sync transaction during freezing
* (protected by @inner_lock)
* @async_recv: process received async transactions since last frozen
* (protected by @inner_lock)
* @freeze_wait: waitqueue of processes waiting for all outstanding
* transactions to be processed
* (protected by @inner_lock)
* @todo: list of work for this process
* (protected by @inner_lock)
* @stats: per-process binder statistics
* (atomics, no lock needed)
* @delivered_death: list of delivered death notification
* (protected by @inner_lock)
* @max_threads: cap on number of binder threads
* (protected by @inner_lock)
* @requested_threads: number of binder threads requested but not
* yet started. In current implementation, can
* only be 0 or 1.
* (protected by @inner_lock)
* @requested_threads_started: number binder threads started
* (protected by @inner_lock)
* @tmp_ref: temporary reference to indicate proc is in use
* (protected by @inner_lock)
* @default_priority: default scheduler priority
* (invariant after initialized)
* @debugfs_entry: debugfs node
* @alloc: binder allocator bookkeeping
* @context: binder_context for this proc
* (invariant after initialized)
* @inner_lock: can nest under outer_lock and/or node lock
* @outer_lock: no nesting under innor or node lock
* Lock order: 1) outer, 2) node, 3) inner
* @binderfs_entry: process-specific binderfs log file
* @oneway_spam_detection_enabled: process enabled oneway spam detection
* or not
*
* Bookkeeping structure for binder processes
*/
struct binder_proc {
struct hlist_node proc_node;
struct rb_root threads;
struct rb_root nodes;
struct rb_root refs_by_desc;
struct rb_root refs_by_node;
struct list_head waiting_threads;
int pid;
struct task_struct *tsk;
const struct cred *cred;
struct hlist_node deferred_work_node;
int deferred_work;
int outstanding_txns;
bool is_dead;
bool is_frozen;
bool sync_recv;
bool async_recv;
wait_queue_head_t freeze_wait;
struct list_head todo;
struct binder_stats stats;
struct list_head delivered_death;
u32 max_threads;
int requested_threads;
int requested_threads_started;
int tmp_ref;
long default_priority;
struct dentry *debugfs_entry;
struct binder_alloc alloc;
struct binder_context *context;
spinlock_t inner_lock;
spinlock_t outer_lock;
struct dentry *binderfs_entry;
bool oneway_spam_detection_enabled;
};
1.4 binder_thread
View the binder_thread source
binder_thread is the thread in the server process that ultimately carries out the service.
/**
* struct binder_thread - binder thread bookkeeping
* @proc: binder process for this thread
* (invariant after initialization)
* @rb_node: element for proc->threads rbtree
* (protected by @proc->inner_lock)
* @waiting_thread_node: element for @proc->waiting_threads list
* (protected by @proc->inner_lock)
* @pid: PID for this thread
* (invariant after initialization)
* @looper: bitmap of looping state
* (only accessed by this thread)
* @looper_needs_return: looping thread needs to exit driver
* (no lock needed)
* @transaction_stack: stack of in-progress transactions for this thread
* (protected by @proc->inner_lock)
* @todo: list of work to do for this thread
* (protected by @proc->inner_lock)
* @process_todo: whether work in @todo should be processed
* (protected by @proc->inner_lock)
* @return_error: transaction errors reported by this thread
* (only accessed by this thread)
* @reply_error: transaction errors reported by target thread
* (protected by @proc->inner_lock)
* @ee: extended error information from this thread
* (protected by @proc->inner_lock)
* @wait: wait queue for thread work
* @stats: per-thread statistics
* (atomics, no lock needed)
* @tmp_ref: temporary reference to indicate thread is in use
* (atomic since @proc->inner_lock cannot
* always be acquired)
* @is_dead: thread is dead and awaiting free
* when outstanding transactions are cleaned up
* (protected by @proc->inner_lock)
*
* Bookkeeping structure for binder threads.
*/
struct binder_thread {
struct binder_proc *proc;
struct rb_node rb_node;
struct list_head waiting_thread_node;
int pid;
int looper; /* only modified by this thread */
bool looper_need_return; /* can be written by other thread */
struct binder_transaction *transaction_stack;
struct list_head todo;
bool process_todo;
struct binder_error return_error;
struct binder_error reply_error;
struct binder_extended_error ee;
wait_queue_head_t wait;
struct binder_stats stats;
atomic_t tmp_ref;
bool is_dead;
};
2. When is the server's binder_node created?
2.1 The Binder driver creates a binder_node for the service
The server passes a flat_binder_object into the driver, and the kernel driver then creates a binder_node for each service; the proc field of the binder_node is the server process.
2.2 service_manager references the binder_node
In the kernel driver, service_manager creates binder_refs to reference the binder_nodes; binder_ref.desc = 1, 2, 3, ... with each integer standing for a different binder_node. In user space, service_manager keeps a linked list of services, each entry holding the service's name and its handle, where the handle corresponds to the value of binder_ref.desc (see the sketch below).
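For a concrete picture of that user-space list, here is a sketch of the per-service record, abbreviated from AOSP's service_manager.c (the real struct carries more fields; treat this as illustrative):
#include <stddef.h>
#include <stdint.h>

/* One registered service as service_manager tracks it in user space. */
struct svcinfo {
    struct svcinfo *next;  /* singly linked list of registered services */
    uint32_t handle;       /* matches binder_ref.desc in the kernel */
    size_t len;            /* name length, in uint16_t units */
    uint16_t name[0];      /* UTF-16 service name, e.g. "hello" */
};
/* Lookup walks the list comparing names and returns si->handle. */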
2.3 The client queries service_manager for a service
- The client passes a service name to service_manager
- service_manager returns the handle to the driver
- In service_manager's binder_ref red-black tree, the driver finds the binder_ref matching the handle, follows binder_ref.node to the binder_node, and finally creates a new binder_ref for the client pointing at that binder_node, with desc starting from 1
- The driver returns the desc to the client, and that desc is the client's handle
2.4 The client sends data to the service via the handle it obtained
- The driver finds the binder_ref from the handle
- From the binder_ref it finds the binder_node
- From the binder_node it finds the server process
3. Overview of the Binder data-transfer process
3.1 The client sends data to the server: write first, then read
- The client builds the data and calls ioctl to send it
- The driver finds the server process from the handle
- The driver puts the data on the server's binder_proc.todo list and wakes up the server's read
- The client sleeps
- The client is woken up
- The client takes the data off its todo list and returns to user space
- It processes the data
3.2 The server fetches the client's data: read first, then write
- It issues a read, sleeping if there is no data yet
- It takes the data off the todo list and returns to user space
- It processes the data
- It writes the result back to the client: the result is put on the client's binder_proc.todo list and the client's read is woken up
3.3 How is data copied between kernel space and user space?
1. The usual approach: two copies
This approach does not use mmap (mmap lets user space directly operate on kernel memory).
- The client builds the data
- It calls into the driver, which does copy_from_user
- The driver processes the user-space data
- The server calls into the driver, which does copy_to_user
- The server processes the driver's data in user space
2. The Binder approach: one copy
"One copy" means the payload itself is copied only once; the data headers are still copied twice.
- Through mmap, the server can directly access, from user space, a chunk of kernel memory (a binder_buffer) inside the driver
- The client builds the data
- It calls into the driver, which does copy_from_user, copying the client's user-space data into that kernel memory the server can access directly via its mmap
- The server program can then use the kernel-resident data directly from user space (see the sketch below)
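Below is a minimal sketch of the server-side setup that enables the single copy, mirroring the binder_open call from Part 1 (the helper name is made up; error handling omitted):
#include <stddef.h>
#include <fcntl.h>
#include <sys/mman.h>

/* The server maps the driver's buffer into its address space once. */
static void *binder_map(int *out_fd, size_t mapsize)
{
    int fd = open("/dev/binder", O_RDWR);
    void *base = mmap(NULL, mapsize, PROT_READ, MAP_PRIVATE, fd, 0);
    *out_fd = fd;
    /* When a client transacts, the driver copy_from_user()s the payload
     * directly into pages backing this mapping, so the server reads it
     * at base + offset with no second copy. */
    return base;
}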
II. Kernel source walkthrough of service registration
In the previous article, Understanding Android Binder Deep in the Kernel, Part 1, we covered the user-space code that registers the hello service with service_manager. Below we quickly recap it, then follow it into the kernel Binder driver.
1. Recap of the user-space service-registration code
ret = svcmgr_publish(bs, svcmgr, "hello", hello_service_handler);
int svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr)
{
int status;
// backing buffer where msg actually stores data: 512 bytes (0.5 KB)
unsigned iodata[512/4];
struct binder_io msg, reply;
// carve the iodata buffer up for msg: the first 16 bytes hold other headers, the rest holds the data to be sent
bio_init(&msg, iodata, sizeof(iodata), 4);
// starting at the 17th byte of iodata, write a 4-byte 0, advancing msg->data and updating msg->data_avail, the remaining buffer space
bio_put_uint32(&msg, 0); // strict mode header
// #define SVC_MGR_NAME "android.os.IServiceManager"
/*
Write service_manager's name, "android.os.IServiceManager".
Storage format: 4 bytes for the string length first, then the string itself at 2 bytes per character.
*/
bio_put_string16_x(&msg, SVC_MGR_NAME);
// write the name of the service being registered, "hello"
bio_put_string16_x(&msg, name);
// ptr is a function address; build a flat_binder_object and store ptr in flat_binder_object->binder
bio_put_obj(&msg, ptr);
// call binder_call to send the data to service_manager
if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE))
return -1;
status = bio_get_uint32(&reply);
binder_done(bs, &msg, &reply);
return status;
}
int binder_call(struct binder_state *bs,
struct binder_io *msg, struct binder_io *reply,
uint32_t target, uint32_t code)
{
int res;
struct binder_write_read bwr;
struct {
uint32_t cmd;
struct binder_transaction_data txn;
} __attribute__((packed)) writebuf;
unsigned readbuf[32];
if (msg->flags & BIO_F_OVERFLOW) {
fprintf(stderr,"binder: txn buffer overflow\n");
goto fail;
}
// build the binder_transaction_data
writebuf.cmd = BC_TRANSACTION;//ioctl command type
writebuf.txn.target.handle = target;//which process the data is sent to
writebuf.txn.code = code;//which function of the target process to call
writebuf.txn.flags = 0;
writebuf.txn.data_size = msg->data - msg->data0;//size of the payload itself
writebuf.txn.offsets_size = ((char*) msg->offs) - ((char*) msg->offs0);//size of the offsets section, which locates the binder_node entity (the address of the service function the sender provides), cf. bio_put_obj(&msg, ptr);
writebuf.txn.data.ptr.buffer = (uintptr_t)msg->data0;//points to the start of the payload memory
writebuf.txn.data.ptr.offsets = (uintptr_t)msg->offs0;//points to the start of the offsets memory
// build the binder_write_read
bwr.write_size = sizeof(writebuf);
bwr.write_consumed = 0;
bwr.write_buffer = (uintptr_t) &writebuf;
hexdump(msg->data0, msg->data - msg->data0);
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);//send the data to the driver via ioctl
if (res < 0) {
fprintf(stderr,"binder: ioctl failed (%s)\n", strerror(errno));
goto fail;
}
res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0);
if (res == 0) return 0;
if (res < 0) goto fail;
}
fail:
memset(reply, 0, sizeof(*reply));
reply->flags |= BIO_F_IOERROR;
return -1;
}
struct binder_write_read {
binder_size_t write_size; /* bytes to write */
binder_size_t write_consumed; /* bytes consumed by driver */
binder_uintptr_t write_buffer;
binder_size_t read_size; /* bytes to read */
binder_size_t read_consumed; /* bytes consumed by driver */
binder_uintptr_t read_buffer;
};
struct binder_transaction_data {
/* The first two are only used for bcTRANSACTION and brTRANSACTION,
* identifying the target and contents of the transaction.
*/
union {
/* target descriptor of command transaction */
__u32 handle;
/* target descriptor of return transaction */
binder_uintptr_t ptr;
} target;
binder_uintptr_t cookie; /* target object cookie */
__u32 code; /* transaction command */
/* General information about the transaction. */
__u32 flags;
__kernel_pid_t sender_pid;
__kernel_uid32_t sender_euid;
binder_size_t data_size; /* number of bytes of data */
binder_size_t offsets_size; /* number of bytes of offsets */
/* If this transaction is inline, the data immediately
* follows here; otherwise, it ends with a pointer to
* the data buffer.
*/
union {
struct {
/* transaction data */
binder_uintptr_t buffer;
/* offsets from buffer to flat_binder_object structs */
binder_uintptr_t offsets;
} ptr;
__u8 buf[8];
} data;
};
As the code above shows, to register a service with service_manager the server packs up its data and calls binder_call, which sends it to the kernel Binder driver via ioctl. Next we analyze what that ioctl does once it enters the Binder driver.
2. Sending data to the kernel Binder driver via ioctl: source analysis
1. A summary of the data the server sends to the kernel Binder driver
The binder_io data is packaged into a binder_write_read that is sent to the kernel driver. Concretely, the binder_transaction_data it carries uses data.ptr.buffer to point at the start of the binder_io payload, so the driver can locate the data, and data.ptr.offsets to point at the start of the binder_io offs array, so the driver can locate the offsets entry that points at the service function. A sketch of the layout follows.
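A comment-only sketch of that layout, reconstructed from the binder_call code above (field widths illustrative):
/*
 * bwr.write_buffer --> struct { uint32_t cmd;   // BC_TRANSACTION
 *                               struct binder_transaction_data txn; }
 *
 * txn.data.ptr.buffer  --> [ strict-mode 0 | "android.os.IServiceManager"
 *                            | "hello" | flat_binder_object ]    (msg->data0)
 * txn.data.ptr.offsets --> [ offset of the flat_binder_object ]  (msg->offs0)
 */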
2. The server's ioctl call lands in binder_ioctl in the kernel Binder driver; view the source
// the server calls ioctl to send data to the driver
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
// in the kernel, the Binder driver handles it in binder_ioctl
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
// fetch the server's binder_proc; it was created when the server opened the binder driver (analyzed later)
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
void __user *ubuf = (void __user *)arg;
/*pr_info("binder_ioctl: %d:%d %x %lx\n",
proc->pid, current->pid, cmd, arg);*/
binder_selftest_alloc(&proc->alloc);
trace_binder_ioctl(cmd, arg);
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret)
goto err_unlocked;
//fetch or create a binder_thread for the server process proc
thread = binder_get_thread(proc);
if (thread == NULL) {
ret = -ENOMEM;
goto err;
}
// from the analysis above, cmd here is BINDER_WRITE_READ
switch (cmd) {
case BINDER_WRITE_READ:
// handle the server's data
ret = binder_ioctl_write_read(filp, arg, thread);
if (ret)
goto err;
break;
......
}
2.1 binder_get_thread: fetch or create the server process's binder_thread
static struct binder_thread *binder_get_thread(struct binder_proc *proc)
{
struct binder_thread *thread;
struct binder_thread *new_thread;
binder_inner_proc_lock(proc);
// look up the calling thread in the proc; if one exists it is returned directly. This call only looks a thread up, it does not add one
thread = binder_get_thread_ilocked(proc, NULL);
binder_inner_proc_unlock(proc);
// if no usable thread was found in the server process, create a new one for it
if (!thread) {
// allocate memory for the new thread
new_thread = kzalloc(sizeof(*thread), GFP_KERNEL);
if (new_thread == NULL)
return NULL;
binder_inner_proc_lock(proc);
// create the new thread
thread = binder_get_thread_ilocked(proc, new_thread);
binder_inner_proc_unlock(proc);
if (thread != new_thread)
kfree(new_thread);
}
return thread;
}
static struct binder_thread *binder_get_thread_ilocked(
struct binder_proc *proc, struct binder_thread *new_thread)
{
struct binder_thread *thread = NULL;
struct rb_node *parent = NULL;
struct rb_node **p = &proc->threads.rb_node;
// walk the rbtree looking for the node where thread->pid == current->pid
while (*p) {
parent = *p;
thread = rb_entry(parent, struct binder_thread, rb_node);
if (current->pid < thread->pid)
p = &(*p)->rb_left;
else if (current->pid > thread->pid)
p = &(*p)->rb_right;
else
return thread;
}
// if no usable thread was found above and we were not asked to create one, return NULL
if (!new_thread)
return NULL;
// initialize the new thread and add it to the process's rbtree
thread = new_thread;
binder_stats_created(BINDER_STAT_THREAD);
thread->proc = proc;
thread->pid = current->pid;
atomic_set(&thread->tmp_ref, 0);
init_waitqueue_head(&thread->wait);
INIT_LIST_HEAD(&thread->todo);
rb_link_node(&thread->rb_node, parent, p);
rb_insert_color(&thread->rb_node, &proc->threads);
thread->looper_need_return = true;
thread->return_error.work.type = BINDER_WORK_RETURN_ERROR;
thread->return_error.cmd = BR_OK;
thread->reply_error.work.type = BINDER_WORK_RETURN_ERROR;
thread->reply_error.cmd = BR_OK;
thread->ee.command = BR_OK;
INIT_LIST_HEAD(&new_thread->waiting_thread_node);
return thread;
}
2.2 binder_ioctl_write_read handles the server's data
static int binder_ioctl_write_read(struct file *filp, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
void __user *ubuf = (void __user *)arg; // the user-space data
// the binder_write_read the server sent from user space
struct binder_write_read bwr;
//copy the data header (the binder_write_read itself) from user space into kernel space
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
binder_debug(BINDER_DEBUG_READ_WRITE,
"%d:%d write %lld at %016llx, read %lld at %016llx\n",
proc->pid, thread->pid,
(u64)bwr.write_size, (u64)bwr.write_buffer,
(u64)bwr.read_size, (u64)bwr.read_buffer);
// as analyzed above, the service-registration data is held in the binder_write_read, and its write_size is greater than 0 here
if (bwr.write_size > 0) { // write data to the driver
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
trace_binder_write_done(ret);
if (ret < 0) {
bwr.read_consumed = 0;
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
if (bwr.read_size > 0) { // read data from the driver
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
trace_binder_read_done(ret);
binder_inner_proc_lock(proc);
if (!binder_worklist_empty_ilocked(&proc->todo))
binder_wakeup_proc_ilocked(proc);
binder_inner_proc_unlock(proc);
if (ret < 0) {
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
binder_debug(BINDER_DEBUG_READ_WRITE,
"%d:%d wrote %lld of %lld, read return %lld of %lld\n",
proc->pid, thread->pid,
(u64)bwr.write_consumed, (u64)bwr.write_size,
(u64)bwr.read_consumed, (u64)bwr.read_size);
// copy the data back to user space
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}
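The copy_from_user/copy_to_user shown next appear to be simplified illustrations; the real kernel implementations also perform access checks and can fail partway.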
static inline int copy_from_user(void *to, const void __user volatile *from,
unsigned long n)
{
volatile_memcpy(to, from, n);
return 0;
}
static inline int copy_to_user(void __user volatile *to, const void *from,
unsigned long n)
{
volatile_memcpy(to, from, n);
return 0;
}
2.3 Analyzing binder_thread_write
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
struct binder_context *context = proc->context;
// the data buffer; from the summary of the sent data above, it consists of a cmd plus a binder_transaction_data
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
// for the incoming data, *consumed is 0, so ptr points at the start of the user-space buffer
void __user *ptr = buffer + *consumed;
// points at the end of the data buffer
void __user *end = buffer + size;
// read the server's data (cmd + binder_transaction_data) one command at a time
while (ptr < end && thread->return_error.cmd == BR_OK) {
int ret;
// fetch the cmd value from the user-space buffer
if (get_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
// advance the pointer past cmd, to the start of the binder_transaction_data
ptr += sizeof(uint32_t);
trace_binder_command(cmd);
if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
atomic_inc(&binder_stats.bc[_IOC_NR(cmd)]);
atomic_inc(&proc->stats.bc[_IOC_NR(cmd)]);
atomic_inc(&thread->stats.bc[_IOC_NR(cmd)]);
}
// from the summary of the sent data above, cmd is BC_TRANSACTION
switch (cmd) {
......
/*
BC_TRANSACTION: cmd for a process sending data
BR_TRANSACTION: cmd for a process receiving data sent with BC_TRANSACTION
BC_REPLY: cmd for a process sending a reply
BR_REPLY: cmd for a process receiving a BC_REPLY reply
*/
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
// copy the binder_transaction_data from user space into kernel space
if (copy_from_user(&tr, ptr, sizeof(tr)))
return -EFAULT;
// advance the pointer past the binder_transaction_data, to the start of the next cmd
ptr += sizeof(tr);
// process the binder_transaction_data
binder_transaction(proc, thread, &tr,
cmd == BC_REPLY, 0);
break;
}
}
}
......
}
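(The get_user shown below is likewise a simplified illustration; the real kernel get_user is an arch-specific macro.)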
int get_user(int *val, const int __user *ptr) {
if (copy_from_user(val, ptr, sizeof(int))) {
return -EFAULT; // error code
}
return 0; // success
}
2.4 Analyzing binder_transaction
Find the target process, service_manager, from the handle.
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply,
binder_size_t extra_buffers_size)
{
......
// here the server is sending data into the kernel, so reply is false
if (reply) { // path where the Binder kernel driver replies to user space
......
} else { // path where user-space data is sent into kernel space
//1. find the target process; here we are registering a service with service_manager, so the target is service_manager, with tr->target.handle=0
if (tr->target.handle) { // tr->target.handle == 0 means the service_manager process; otherwise it is some other process
.......
} else { //handle the service_manager process
mutex_lock(&context->context_mgr_node_lock);
//this node was created via the BINDER_SET_CONTEXT_MGR cmd when service_manager was created
// Although context is reached through the sending process, every process that opens the same binder device shares a single binder_context (binder_open sets proc->context = &binder_dev->context), so binder_context_mgr_node really is service_manager's binder_node.
target_node = context->binder_context_mgr_node;
if (target_node)
target_node = binder_get_node_refs_for_txn(
target_node, &target_proc,
&return_error);
else
return_error = BR_DEAD_REPLY;
mutex_unlock(&context->context_mgr_node_lock);
if (target_node && target_proc->pid == proc->pid) {
binder_user_error("%d:%d got transaction to context manager from process owning it\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
goto err_invalid_target_handle;
}
}
......
}
}
Before going further, let's look at when service_manager's binder_node and binder_proc are created.
2.5 When is service_manager's binder_node created?
In the previous article, Understanding Android Binder Deep in the Kernel, Part 1, we saw while analyzing the service_manager source that at startup it runs the code below to tell the kernel Binder driver that it is the service_manager; service_manager's binder_node object in the kernel driver is created during this step. Let's analyze the source.
int main(int argc, char **argv)
{
......
if (binder_become_context_manager(bs)) {//tell the driver that this process is service_manager
ALOGE("cannot become context manager (%s)\n", strerror(errno));
return -1;
}
......
}
int binder_become_context_manager(struct binder_state *bs)
{
return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
// as noted above, the user-space ioctl call lands in binder_ioctl in the kernel Binder driver binder.c
// https://github.com/torvalds/linux/blob/master/drivers/android/binder.c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
void __user *ubuf = (void __user *)arg;
/*pr_info("binder_ioctl: %d:%d %x %lx\n",
proc->pid, current->pid, cmd, arg);*/
binder_selftest_alloc(&proc->alloc);
trace_binder_ioctl(cmd, arg);
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret)
goto err_unlocked;
//fetch or create a binder_thread for the proc
thread = binder_get_thread(proc);
if (thread == NULL) {
ret = -ENOMEM;
goto err;
}
switch (cmd) {
......
case BINDER_SET_CONTEXT_MGR:
ret = binder_ioctl_set_ctx_mgr(filp, NULL);
if (ret)
goto err;
break;
......
}
static int binder_ioctl_set_ctx_mgr(struct file *filp,
struct flat_binder_object *fbo)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
struct binder_context *context = proc->context;
struct binder_node *new_node;
kuid_t curr_euid = current_euid();
mutex_lock(&context->context_mgr_node_lock);
if (context->binder_context_mgr_node) {
pr_err("BINDER_SET_CONTEXT_MGR already set\n");
ret = -EBUSY;
goto out;
}
ret = security_binder_set_context_mgr(proc->cred);
if (ret < 0)
goto out;
if (uid_valid(context->binder_context_mgr_uid)) {
if (!uid_eq(context->binder_context_mgr_uid, curr_euid)) {
pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
from_kuid(&init_user_ns, curr_euid),
from_kuid(&init_user_ns,
context->binder_context_mgr_uid));
ret = -EPERM;
goto out;
}
} else {
context->binder_context_mgr_uid = curr_euid;
}
//create service_manager's binder_node
new_node = binder_new_node(proc, fbo);
if (!new_node) {
ret = -ENOMEM;
goto out;
}
binder_node_lock(new_node);
new_node->local_weak_refs++;
new_node->local_strong_refs++;
new_node->has_strong_ref = 1;
new_node->has_weak_ref = 1;
//point service_manager's context at the new binder_node
context->binder_context_mgr_node = new_node;
binder_node_unlock(new_node);
binder_put_node(new_node);
out:
mutex_unlock(&context->context_mgr_node_lock);
return ret;
}
static struct binder_node *binder_new_node(struct binder_proc *proc,
struct flat_binder_object *fp)
{
struct binder_node *node;
struct binder_node *new_node = kzalloc(sizeof(*node), GFP_KERNEL);
if (!new_node)
return NULL;
binder_inner_proc_lock(proc);
//initialize the binder_node and add it to service_manager's rbtree
node = binder_init_node_ilocked(proc, new_node, fp);
binder_inner_proc_unlock(proc);
if (node != new_node)
/*
* The node was already added by another thread
*/
kfree(new_node);
return node;
}
static struct binder_node *binder_init_node_ilocked(
struct binder_proc *proc,
struct binder_node *new_node,
struct flat_binder_object *fp)
{
struct rb_node **p = &proc->nodes.rb_node;
struct rb_node *parent = NULL;
struct binder_node *node;
// for service_manager, fp is NULL
binder_uintptr_t ptr = fp ? fp->binder : 0;
binder_uintptr_t cookie = fp ? fp->cookie : 0;
__u32 flags = fp ? fp->flags : 0;
assert_spin_locked(&proc->inner_lock);
// walk the rbtree
while (*p) {
parent = *p;
// convert to a binder_node
node = rb_entry(parent, struct binder_node, rb_node);
// find the position where the new binder_node should be inserted into the rbtree
if (ptr < node->ptr)
p = &(*p)->rb_left;
else if (ptr > node->ptr)
p = &(*p)->rb_right;
else {
/*
* A matching node is already in
* the rb tree. Abandon the init
* and return it.
*/
binder_inc_node_tmpref_ilocked(node);
return node;
}
}
// set up the new node's fields
node = new_node;
binder_stats_created(BINDER_STAT_NODE);
node->tmp_refs++;
rb_link_node(&node->rb_node, parent, p);
rb_insert_color(&node->rb_node, &proc->nodes);
node->debug_id = atomic_inc_return(&binder_last_id);
node->proc = proc;
node->ptr = ptr;
node->cookie = cookie;
node->work.type = BINDER_WORK_NODE;//binder_work type
node->min_priority = flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
node->accept_fds = !!(flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
node->txn_security_ctx = !!(flags & FLAT_BINDER_FLAG_TXN_SECURITY_CTX);
spin_lock_init(&node->lock);
INIT_LIST_HEAD(&node->work.entry);
INIT_LIST_HEAD(&node->async_todo);
binder_debug(BINDER_DEBUG_INTERNAL_REFS,
"%d:%d node %d u%016llx c%016llx created\n",
proc->pid, current->pid, node->debug_id,
(u64)node->ptr, (u64)node->cookie);
return node;
}
#define rb_entry(ptr, type, member) container_of(ptr, type, member)
// recover a pointer to the containing struct from a pointer to one of its members
#define container_of(ptr, type, member) ({ \
void *__mptr = (void *)(ptr); \
......
((type *)(__mptr - offsetof(type, member))); })
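To see concretely what rb_entry/container_of do, here is a small self-contained illustration (the demo struct and values are made up):
#include <stddef.h>
#include <stdio.h>

struct rb_node_demo { struct rb_node_demo *left, *right; }; /* stand-in */

struct binder_thread_demo {
    int pid;
    struct rb_node_demo rb_node; /* member embedded in the larger struct */
};

#define container_of_demo(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

int main(void)
{
    struct binder_thread_demo t = { .pid = 1234 };
    struct rb_node_demo *n = &t.rb_node;  /* what an rbtree walk hands us */
    /* subtracting the member's offset recovers the containing struct */
    struct binder_thread_demo *back =
        container_of_demo(n, struct binder_thread_demo, rb_node);
    printf("pid = %d\n", back->pid);      /* prints 1234 */
    return 0;
}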
2.6 When is service_manager's binder_proc created?
In the previous article, Understanding Android Binder Deep in the Kernel, Part 1, we saw while analyzing the service_manager source that the driver must be opened before service_manager runs, and the binder_proc object is created when the driver is opened. Every application that uses Binder communication opens the driver, and each one's binder_proc is likewise created in the kernel at open time.
int main(int argc, char **argv)
{
struct binder_state *bs;
bs = binder_open(128*1024);
......
}
struct binder_state *binder_open(size_t mapsize)
{
struct binder_state *bs;
struct binder_version vers;
bs = malloc(sizeof(*bs));
if (!bs) {
errno = ENOMEM;
return NULL;
}
// open the binder driver
bs->fd = open("/dev/binder", O_RDWR);
......
}
// the user-space open() call ends up in binder_open in the kernel Binder driver binder.c
// https://github.com/torvalds/linux/blob/master/drivers/android/binder.c
static int binder_open(struct inode *nodp, struct file *filp)
{
struct binder_proc *proc, *itr;
struct binder_device *binder_dev;
struct binderfs_info *info;
struct dentry *binder_binderfs_dir_entry_proc = NULL;
bool existing_pid = false;
binder_debug(BINDER_DEBUG_OPEN_CLOSE, "%s: %d:%d\n", __func__,
current->group_leader->pid, current->pid);
//create the binder_proc
proc = kzalloc(sizeof(*proc), GFP_KERNEL);
if (proc == NULL)
return -ENOMEM;
spin_lock_init(&proc->inner_lock);
spin_lock_init(&proc->outer_lock);
get_task_struct(current->group_leader);
proc->tsk = current->group_leader;
proc->cred = get_cred(filp->f_cred);
INIT_LIST_HEAD(&proc->todo);
init_waitqueue_head(&proc->freeze_wait);
proc->default_priority = task_nice(current);
/* binderfs stashes devices in i_private */
if (is_binderfs_device(nodp)) {
binder_dev = nodp->i_private;
info = nodp->i_sb->s_fs_info;
binder_binderfs_dir_entry_proc = info->proc_log_dir;
} else {
binder_dev = container_of(filp->private_data,
struct binder_device, miscdev);
}
refcount_inc(&binder_dev->ref);
proc->context = &binder_dev->context;
binder_alloc_init(&proc->alloc);
binder_stats_created(BINDER_STAT_PROC);
proc->pid = current->group_leader->pid;
INIT_LIST_HEAD(&proc->delivered_death);
INIT_LIST_HEAD(&proc->waiting_threads);
filp->private_data = proc;
mutex_lock(&binder_procs_lock);
hlist_for_each_entry(itr, &binder_procs, proc_node) {
if (itr->pid == proc->pid) {
existing_pid = true;
break;
}
}
hlist_add_head(&proc->proc_node, &binder_procs);
mutex_unlock(&binder_procs_lock);
if (binder_debugfs_dir_entry_proc && !existing_pid) {
char strbuf[11];
snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
/*
* proc debug entries are shared between contexts.
* Only create for the first PID to avoid debugfs log spamming
* The printing code will anyway print all contexts for a given
* PID so this is not a problem.
*/
proc->debugfs_entry = debugfs_create_file(strbuf, 0444,
binder_debugfs_dir_entry_proc,
(void *)(unsigned long)proc->pid,
&proc_fops);
}
if (binder_binderfs_dir_entry_proc && !existing_pid) {
char strbuf[11];
struct dentry *binderfs_entry;
snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
/*
* Similar to debugfs, the process specific log file is shared
* between contexts. Only create for the first PID.
* This is ok since same as debugfs, the log file will contain
* information on all contexts of a given PID.
*/
binderfs_entry = binderfs_create_file(binder_binderfs_dir_entry_proc,
strbuf, &proc_fops, (void *)(unsigned long)proc->pid);
if (!IS_ERR(binderfs_entry)) {
proc->binderfs_entry = binderfs_entry;
} else {
int error;
error = PTR_ERR(binderfs_entry);
pr_warn("Unable to create file %s in binderfs (error %d)\n",
strbuf, error);
}
}
return 0;
}
With service_manager's binder_node in hand, let's continue with binder_transaction from section 2.4.
2.7 Continuing binder_transaction from section 2.4
binder_transaction uses copy_from_user to put the data the server assembled (data.ptr.offsets, which records where the server wrote its actual data) into the memory the target process, service_manager, has mmapped. The organization of the data the server sends was illustrated earlier; refer back to that figure when following the data here.
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply,
binder_size_t extra_buffers_size)
{
int ret;
struct binder_transaction *t;
struct binder_work *w;
struct binder_work *tcomplete;
binder_size_t buffer_offset = 0;
binder_size_t off_start_offset, off_end_offset;
binder_size_t off_min;
binder_size_t sg_buf_offset, sg_buf_end_offset;
binder_size_t user_offset = 0;
struct binder_proc *target_proc = NULL;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error = 0;
uint32_t return_error_param = 0;
uint32_t return_error_line = 0;
binder_size_t last_fixup_obj_off = 0;
binder_size_t last_fixup_min_off = 0;
struct binder_context *context = proc->context;
int t_debug_id = atomic_inc_return(&binder_last_id);
ktime_t t_start_time = ktime_get();
char *secctx = NULL;
u32 secctx_sz = 0;
struct list_head sgc_head;
struct list_head pf_head;
const void __user *user_buffer = (const void __user *)
(uintptr_t)tr->data.ptr.buffer;
INIT_LIST_HEAD(&sgc_head);
INIT_LIST_HEAD(&pf_head);
e = binder_transaction_log_add(&binder_transaction_log);
e->debug_id = t_debug_id;
e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
e->from_proc = proc->pid;
e->from_thread = thread->pid;
e->target_handle = tr->target.handle;
e->data_size = tr->data_size;
e->offsets_size = tr->offsets_size;
strscpy(e->context_name, proc->context->name, BINDERFS_MAX_NAME);
binder_inner_proc_lock(proc);
binder_set_extended_error(&thread->ee, t_debug_id, BR_OK, 0);
binder_inner_proc_unlock(proc);
if (reply) {// find the process to reply to
......
} else {// 1. find the target process to send to
if (tr->target.handle) { // the target process is not service_manager
.....
} else { //the target process is service_manager
// find service_manager's binder_node
.....
}
......
}
if (target_thread)
e->to_thread = target_thread->pid;
e->to_proc = target_proc->pid;
/* TODO: reuse incoming transaction for reply */
// allocate memory for the binder_transaction
t = kzalloc(sizeof(*t), GFP_KERNEL);
.....
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
else
t->from = NULL;
// record basic information about both ends of the transaction
t->from_pid = proc->pid;
t->from_tid = thread->pid;
t->sender_euid = task_euid(proc->tsk);
t->to_proc = target_proc;
t->to_thread = target_thread;
t->code = tr->code;
t->flags = tr->flags;
t->priority = task_nice(current);
......
t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
tr->offsets_size, extra_buffers_size,
!reply && (t->flags & TF_ONE_WAY));
......
t->buffer->debug_id = t->debug_id;
t->buffer->transaction = t;
t->buffer->target_node = target_node;
t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);
trace_binder_transaction_alloc_buf(t->buffer);
// copy the server's data into the memory the target process service_manager has mmapped, i.e. the memory t->buffer points to
if (binder_alloc_copy_user_to_buffer(
&target_proc->alloc,
t->buffer,
ALIGN(tr->data_size, sizeof(void *)),
(const void __user *)
(uintptr_t)tr->data.ptr.offsets,
tr->offsets_size)) {
binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EFAULT;
return_error_line = __LINE__;
goto err_copy_data_failed;
}
......
}
/**
* binder_alloc_copy_user_to_buffer() - copy src user to tgt user
* @alloc: binder_alloc for this proc
* @buffer: binder buffer to be accessed
* @buffer_offset: offset into @buffer data
* @from: userspace pointer to source buffer
* @bytes: bytes to copy
*
* Copy bytes from source userspace to target buffer.
*
* Return: bytes remaining to be copied
*/
unsigned long
binder_alloc_copy_user_to_buffer(struct binder_alloc *alloc,
struct binder_buffer *buffer,
binder_size_t buffer_offset,
const void __user *from,
size_t bytes)
{
if (!check_buffer(alloc, buffer, buffer_offset, bytes))
return bytes;
while (bytes) {
unsigned long size;
unsigned long ret;
struct page *page;
pgoff_t pgoff;
void *kptr;
page = binder_alloc_get_page(alloc, buffer,
buffer_offset, &pgoff);
size = min_t(size_t, bytes, PAGE_SIZE - pgoff);
kptr = kmap_local_page(page) + pgoff;
// copy the server's data into the kernel memory service_manager has mmapped
ret = copy_from_user(kptr, from, size);
kunmap_local(kptr);
if (ret)
return bytes - size + ret;
bytes -= size;
from += size;
buffer_offset += size;
}
return 0;
}
2.8 binder_transaction, continued
Next, binder_transaction handles the offsets data passed in by the server. The offsets locate the flat_binder_object used to build the binder_node entity, and the flat_binder_object stores the pointer to the service function the server provides. This is the crux: the whole point is to let clients call that function. The steps (see the code below):
- Copy the flat_binder_object the server sent into the kernel driver
- Build a binder_node for the server in kernel space
- Build a binder_ref for the service_manager process in kernel space
- Bump the reference count; some information is returned to the current process so it knows it is being referenced
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply,
binder_size_t extra_buffers_size)
{
int ret;
struct binder_transaction *t;
struct binder_work *w;
struct binder_work *tcomplete;
binder_size_t buffer_offset = 0;
binder_size_t off_start_offset, off_end_offset;
binder_size_t off_min;
binder_size_t sg_buf_offset, sg_buf_end_offset;
binder_size_t user_offset = 0;
struct binder_proc *target_proc = NULL;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error = 0;
uint32_t return_error_param = 0;
uint32_t return_error_line = 0;
binder_size_t last_fixup_obj_off = 0;
binder_size_t last_fixup_min_off = 0;
struct binder_context *context = proc->context;
int t_debug_id = atomic_inc_return(&binder_last_id);
ktime_t t_start_time = ktime_get();
char *secctx = NULL;
u32 secctx_sz = 0;
struct list_head sgc_head;
struct list_head pf_head;
// the server's data buffer
const void __user *user_buffer = (const void __user *)
(uintptr_t)tr->data.ptr.buffer;
INIT_LIST_HEAD(&sgc_head);
INIT_LIST_HEAD(&pf_head);
e = binder_transaction_log_add(&binder_transaction_log);
e->debug_id = t_debug_id;
e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
e->from_proc = proc->pid;
e->from_thread = thread->pid;
e->target_handle = tr->target.handle;
e->data_size = tr->data_size;
e->offsets_size = tr->offsets_size;
strscpy(e->context_name, proc->context->name, BINDERFS_MAX_NAME);
binder_inner_proc_lock(proc);
binder_set_extended_error(&thread->ee, t_debug_id, BR_OK, 0);
binder_inner_proc_unlock(proc);
if (reply) {// find the process to reply to
......
} else {// 1. find the target process to send to
if (tr->target.handle) { // the target process is not service_manager
.....
} else { //the target process is service_manager
// find service_manager's binder_node
.....
}
......
}
if (target_thread)
e->to_thread = target_thread->pid;
e->to_proc = target_proc->pid;
/* TODO: reuse incoming transaction for reply */
// allocate memory for the binder_transaction
t = kzalloc(sizeof(*t), GFP_KERNEL);
.....
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
else
t->from = NULL;
// record basic information about both ends of the transaction
t->from_pid = proc->pid;
t->from_tid = thread->pid;
t->sender_euid = task_euid(proc->tsk);
t->to_proc = target_proc;
t->to_thread = target_thread;
t->code = tr->code;
t->flags = tr->flags;
t->priority = task_nice(current);
......
t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
tr->offsets_size, extra_buffers_size,
!reply && (t->flags & TF_ONE_WAY));
......
t->buffer->debug_id = t->debug_id;
t->buffer->transaction = t;
t->buffer->target_node = target_node;
t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);
trace_binder_transaction_alloc_buf(t->buffer);
// copy the server's offsets data into the memory the target process service_manager has mmapped
if (binder_alloc_copy_user_to_buffer(
&target_proc->alloc,
t->buffer,
ALIGN(tr->data_size, sizeof(void *)),
(const void __user *)
(uintptr_t)tr->data.ptr.offsets,
tr->offsets_size)) {
binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EFAULT;
return_error_line = __LINE__;
goto err_copy_data_failed;
}
......
off_start_offset = ALIGN(tr->data_size, sizeof(void *));
buffer_offset = off_start_offset;
off_end_offset = off_start_offset + tr->offsets_size;
sg_buf_offset = ALIGN(off_end_offset, sizeof(void *));
sg_buf_end_offset = sg_buf_offset + extra_buffers_size -
ALIGN(secctx_sz, sizeof(u64));
off_min = 0;
//process the offsets data the server passed in; it locates the flat_binder_object used to build the binder_node entity
for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
buffer_offset += sizeof(binder_size_t)) {
struct binder_object_header *hdr;
size_t object_size;
struct binder_object object;
binder_size_t object_offset;
binder_size_t copy_size;
// copy from t->buffer into object_offset
if (binder_alloc_copy_from_buffer(&target_proc->alloc,
&object_offset,
t->buffer,
buffer_offset,
sizeof(object_offset))) {
binder_txn_error("%d:%d copy offset from buffer failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
goto err_bad_offset;
}
/*
* Copy the source user buffer up to the next object
* that will be processed.
*/
copy_size = object_offset - user_offset;
if (copy_size && (user_offset > object_offset ||
binder_alloc_copy_user_to_buffer(
&target_proc->alloc,
t->buffer, user_offset,
user_buffer + user_offset,
copy_size))) {
binder_user_error("%d:%d got transaction with invalid data ptr\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EFAULT;
return_error_line = __LINE__;
goto err_copy_data_failed;
}
// copy the server's flat_binder_object into the kernel memory object points to
object_size = binder_get_object(target_proc, user_buffer,
t->buffer, object_offset, &object);
......
/*
* Set offset to the next buffer fragment to be
* copied
*/
user_offset = object_offset + object_size;
hdr = &object.hdr;
off_min = object_offset + object_size;
switch (hdr->type) {
//handle a binder entity
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
struct flat_binder_object *fp;
// take the flat_binder_object data the server sent
fp = to_flat_binder_object(hdr);
// create a binder_node from the server's flat_binder_object
ret = binder_translate_binder(fp, t, thread);
if (ret < 0 ||
binder_alloc_copy_to_buffer(&target_proc->alloc,
t->buffer,
object_offset,
fp, sizeof(*fp))) {
binder_txn_error("%d:%d translate binder failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = ret;
return_error_line = __LINE__;
goto err_translate_failed;
}
} break;
//handle a binder reference
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
......
} break;
......
default:
binder_user_error("%d:%d got transaction with invalid object type, %x\n",
proc->pid, thread->pid, hdr->type);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
goto err_bad_object_type;
}
}
......
}
/**
* binder_get_object() - gets object and checks for valid metadata
* @proc: binder_proc owning the buffer
* @u: sender's user pointer to base of buffer
* @buffer: binder_buffer that we're parsing.
* @offset: offset in the @buffer at which to validate an object.
* @object: struct binder_object to read into
*
* Copy the binder object at the given offset into @object. If @u is
* provided then the copy is from the sender's buffer. If not, then
* it is copied from the target's @buffer.
*
* Return: If there's a valid metadata object at @offset, the
* size of that object. Otherwise, it returns zero. The object
* is read into the struct binder_object pointed to by @object.
*/
static size_t binder_get_object(struct binder_proc *proc,
const void __user *u,
struct binder_buffer *buffer,
unsigned long offset,
struct binder_object *object)
{
size_t read_size;
struct binder_object_header *hdr;
size_t object_size = 0;
read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset);
if (offset > buffer->data_size || read_size < sizeof(*hdr) ||
!IS_ALIGNED(offset, sizeof(u32)))
return 0;
if (u) {
/* Why does u + offset point at the flat_binder_object the server sent?
The answer is in the previous article's analysis of the server-side bio_put_obj:
*bio->offs++ = ((char*) obj) - ((char*) bio->data0); //records where obj sits (a relative position, shown just to aid understanding)
*/
if (copy_from_user(object, u + offset, read_size))
return 0;
} else {
if (binder_alloc_copy_from_buffer(&proc->alloc, object, buffer,
offset, read_size))
return 0;
}
/* Ok, now see if we read a complete object. */
hdr = &object->hdr;
switch (hdr->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER:
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE:
object_size = sizeof(struct flat_binder_object);
break;
case BINDER_TYPE_FD:
object_size = sizeof(struct binder_fd_object);
break;
case BINDER_TYPE_PTR:
object_size = sizeof(struct binder_buffer_object);
break;
case BINDER_TYPE_FDA:
object_size = sizeof(struct binder_fd_array_object);
break;
default:
return 0;
}
if (offset <= buffer->data_size - object_size &&
buffer->data_size >= object_size)
return object_size;
else
return 0;
}
2.9 Analyzing binder_translate_binder
From the server's flat_binder_object, create a binder_node in the kernel, and create a binder_ref pointing at that freshly created binder_node for later use.
static int binder_translate_binder(struct flat_binder_object *fp,
struct binder_transaction *t,
struct binder_thread *thread)
{
struct binder_node *node;
struct binder_proc *proc = thread->proc;
struct binder_proc *target_proc = t->to_proc;
struct binder_ref_data rdata;
int ret = 0;
// first look fp->binder up in the rbtree
node = binder_get_node(proc, fp->binder);
// if it is not found, create a binder_node for the server in the kernel
if (!node) {
//create the server's binder_node, which records the implementation function the server provides to clients
node = binder_new_node(proc, fp);
if (!node)
return -ENOMEM;
}
......
//create a binder_ref for the target process, pointing at the server's binder_node
ret = binder_inc_ref_for_node(target_proc, node,
fp->hdr.type == BINDER_TYPE_BINDER,
&thread->todo, &rdata);
if (ret)
goto done;
//the data has already been copied into the target process; if the sending server's binder is an entity,
//its type in the target process must be rewritten to a reference, since the target merely references the server's binder entity
if (fp->hdr.type == BINDER_TYPE_BINDER)
fp->hdr.type = BINDER_TYPE_HANDLE;
else
fp->hdr.type = BINDER_TYPE_WEAK_HANDLE;
fp->binder = 0;
/* set handle as the binder reference: through this handle the matching binder_ref can be found,
and through the binder_ref the binder_node that provides the service */
fp->handle = rdata.desc;
fp->cookie = 0;
trace_binder_transaction_node_to_ref(t, node, &rdata);
binder_debug(BINDER_DEBUG_TRANSACTION,
" node %d u%016llx -> ref %d desc %d\n",
node->debug_id, (u64)node->ptr,
rdata.debug_id, rdata.desc);
done:
binder_put_node(node);
return ret;
}
static int binder_inc_ref_for_node(struct binder_proc *proc,
struct binder_node *node,
bool strong,
struct list_head *target_list,
struct binder_ref_data *rdata)
{
struct binder_ref *ref;
struct binder_ref *new_ref = NULL;
int ret = 0;
binder_proc_lock(proc);
// first look the binder_ref up in the rbtree
ref = binder_get_ref_for_node_olocked(proc, node, NULL);
// if it does not exist, create it
if (!ref) {
binder_proc_unlock(proc);
new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
if (!new_ref)
return -ENOMEM;
binder_proc_lock(proc);
// create the binder_ref
ref = binder_get_ref_for_node_olocked(proc, node, new_ref);
}
//bump the reference count
ret = binder_inc_ref_olocked(ref, strong, target_list);
*rdata = ref->data;
if (ret && ref == new_ref) {
/*
* Cleanup the failed reference here as the target
* could now be dead and have already released its
* references by now. Calling on the new reference
* with strong=0 and a tmp_refs will not decrement
* the node. The new_ref gets kfree'd below.
*/
binder_cleanup_ref_olocked(new_ref);
ref = NULL;
}
binder_proc_unlock(proc);
if (new_ref && ref != new_ref)
/*
* Another thread created the ref first so
* free the one we allocated
*/
kfree(new_ref);
return ret;
}
static struct binder_ref *binder_get_ref_for_node_olocked(
struct binder_proc *proc,
struct binder_node *node,
struct binder_ref *new_ref)
{
struct binder_context *context = proc->context;
struct rb_node **p = &proc->refs_by_node.rb_node;
struct rb_node *parent = NULL;
struct binder_ref *ref;
struct rb_node *n;
while (*p) {
parent = *p;
ref = rb_entry(parent, struct binder_ref, rb_node_node);
if (node < ref->node)
p = &(*p)->rb_left;
else if (node > ref->node)
p = &(*p)->rb_right;
else
return ref;
}
if (!new_ref)
return NULL;
binder_stats_created(BINDER_STAT_REF);
new_ref->data.debug_id = atomic_inc_return(&binder_last_id);
new_ref->proc = proc;
// the binder_ref's node points at the server's binder_node
new_ref->node = node;
rb_link_node(&new_ref->rb_node_node, parent, p);
rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);
/* assign binder_ref's data.desc; this integer stands for the binder_node,
and it is later resolved back to the binder_ref. If the node is service_manager's, desc is 0; otherwise it starts from 1.
*/
new_ref->data.desc = (node == context->binder_context_mgr_node) ? 0 : 1;
for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
ref = rb_entry(n, struct binder_ref, rb_node_desc);
if (ref->data.desc > new_ref->data.desc)
break;
// bump desc to the next candidate value
new_ref->data.desc = ref->data.desc + 1;
}
// link the new ref into the refs_by_desc rbtree
......
return new_ref;
}
2.10 binder_transaction, continued
Put the data on the target process's todo list, then wake up the target process, service_manager.
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread,
struct binder_transaction_data *tr, int reply,
binder_size_t extra_buffers_size)
{
int ret;
struct binder_transaction *t;
struct binder_work *w;
struct binder_work *tcomplete;
binder_size_t buffer_offset = 0;
binder_size_t off_start_offset, off_end_offset;
binder_size_t off_min;
binder_size_t sg_buf_offset, sg_buf_end_offset;
binder_size_t user_offset = 0;
struct binder_proc *target_proc = NULL;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error = 0;
uint32_t return_error_param = 0;
uint32_t return_error_line = 0;
binder_size_t last_fixup_obj_off = 0;
binder_size_t last_fixup_min_off = 0;
struct binder_context *context = proc->context;
int t_debug_id = atomic_inc_return(&binder_last_id);
ktime_t t_start_time = ktime_get();
char *secctx = NULL;
u32 secctx_sz = 0;
struct list_head sgc_head;
struct list_head pf_head;
// the server's data buffer
const void __user *user_buffer = (const void __user *)
(uintptr_t)tr->data.ptr.buffer;
INIT_LIST_HEAD(&sgc_head);
INIT_LIST_HEAD(&pf_head);
e = binder_transaction_log_add(&binder_transaction_log);
e->debug_id = t_debug_id;
e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
e->from_proc = proc->pid;
e->from_thread = thread->pid;
e->target_handle = tr->target.handle;
e->data_size = tr->data_size;
e->offsets_size = tr->offsets_size;
strscpy(e->context_name, proc->context->name, BINDERFS_MAX_NAME);
binder_inner_proc_lock(proc);
binder_set_extended_error(&thread->ee, t_debug_id, BR_OK, 0);
binder_inner_proc_unlock(proc);
if (reply) {// find the process to reply to
......
} else {// 1. find the target process to send to
if (tr->target.handle) { // the target process is not service_manager
.....
} else { //the target process is service_manager
// find service_manager's binder_node
.....
}
......
}
if (target_thread)
e->to_thread = target_thread->pid;
e->to_proc = target_proc->pid;
/* TODO: reuse incoming transaction for reply */
// allocate memory for the binder_transaction
t = kzalloc(sizeof(*t), GFP_KERNEL);
.....
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
else
t->from = NULL;
// record basic information about both ends of the transaction
t->from_pid = proc->pid;
t->from_tid = thread->pid;
t->sender_euid = task_euid(proc->tsk);
t->to_proc = target_proc;
t->to_thread = target_thread;
t->code = tr->code;
t->flags = tr->flags;
t->priority = task_nice(current);
......
t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
tr->offsets_size, extra_buffers_size,
!reply && (t->flags & TF_ONE_WAY));
......
t->buffer->debug_id = t->debug_id;
t->buffer->transaction = t;
t->buffer->target_node = target_node;
t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);
trace_binder_transaction_alloc_buf(t->buffer);
// copy the server's offsets data into the memory the target process service_manager has mmapped
if (binder_alloc_copy_user_to_buffer(
&target_proc->alloc,
t->buffer,
ALIGN(tr->data_size, sizeof(void *)),
(const void __user *)
(uintptr_t)tr->data.ptr.offsets,
tr->offsets_size)) {
binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EFAULT;
return_error_line = __LINE__;
goto err_copy_data_failed;
}
......
off_start_offset = ALIGN(tr->data_size, sizeof(void *));
buffer_offset = off_start_offset;
off_end_offset = off_start_offset + tr->offsets_size;
sg_buf_offset = ALIGN(off_end_offset, sizeof(void *));
sg_buf_end_offset = sg_buf_offset + extra_buffers_size -
ALIGN(secctx_sz, sizeof(u64));
off_min = 0;
//process the offsets data the server passed in; it locates the flat_binder_object used to build the binder_node entity
for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
buffer_offset += sizeof(binder_size_t)) {
struct binder_object_header *hdr;
size_t object_size;
struct binder_object object;
binder_size_t object_offset;
binder_size_t copy_size;
// copy from t->buffer into object_offset
if (binder_alloc_copy_from_buffer(&target_proc->alloc,
&object_offset,
t->buffer,
buffer_offset,
sizeof(object_offset))) {
binder_txn_error("%d:%d copy offset from buffer failed\n",
thread->pid, proc->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
goto err_bad_offset;
}
/*
* Copy the source user buffer up to the next object
* that will be processed.
*/
copy_size = object_offset - user_offset;
if (copy_size && (user_offset > object_offset ||
binder_alloc_copy_user_to_buffer(
&target_proc->alloc,
t->buffer, user_offset,
user_buffer + user_offset,
copy_size))) {
binder_user_error("%d:%d got transaction with invalid data ptr\n",
proc->pid, thread->pid);
return_error = BR_FAILED_REPLY;
return_error_param = -EFAULT;
return_error_line = __LINE__;
goto err_copy_data_failed;
}
// copy the server's flat_binder_object into the kernel memory object points to
object_size = binder_get_object(target_proc, user_buffer,
t->buffer, object_offset, &object);
......
/*
* Set offset to the next buffer fragment to be
* copied
*/
user_offset = object_offset + object_size;
hdr = &object.hdr;
off_min = object_offset + object_size;
switch (hdr->type) {
//handle a binder entity
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
struct flat_binder_object *fp;
// take the flat_binder_object data the server sent
fp = to_flat_binder_object(hdr);
// create a binder_node from the server's flat_binder_object
ret = binder_translate_binder(fp, t, thread);
......
} break;
//handle a binder reference
case BINDER_TYPE_HANDLE:
case BINDER_TYPE_WEAK_HANDLE: {
......
} break;
......
default:
binder_user_error("%d:%d got transaction with invalid object type, %x\n",
proc->pid, thread->pid, hdr->type);
return_error = BR_FAILED_REPLY;
return_error_param = -EINVAL;
return_error_line = __LINE__;
goto err_bad_object_type;
}
}
......
// queue the work and wake up the target process, service_manager
if (t->buffer->oneway_spam_suspect)
tcomplete->type = BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT;
else
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
t->work.type = BINDER_WORK_TRANSACTION;
if (reply) {
......
} else if (!(t->flags & TF_ONE_WAY)) {
......
} else {
......
binder_enqueue_thread_work(thread, tcomplete);
......
}
......
}
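Each entry the loop above dispatches on is a flat_binder_object embedded in the transaction data. For reference, its definition in the uapi header (include/uapi/linux/android/binder.h) is shown below: for a BINDER_TYPE_BINDER entry the union holds the sender's local binder pointer, and binder_translate_binder rewrites the object into a BINDER_TYPE_HANDLE carrying a handle valid in the receiver.
struct flat_binder_object {
	struct binder_object_header	hdr;
	__u32				flags;
	/* 8 bytes of data. */
	union {
		binder_uintptr_t	binder;	/* local object */
		__u32			handle;	/* remote object */
	};
	/* extra data associated with local object */
	binder_uintptr_t	cookie;
};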
static void binder_enqueue_thread_work(struct binder_thread *thread,
struct binder_work *work)
{
binder_inner_proc_lock(thread->proc);
binder_enqueue_thread_work_ilocked(thread, work);
binder_inner_proc_unlock(thread->proc);
}
/**
* binder_enqueue_thread_work_ilocked() - Add an item to the thread work list
* @thread: thread to queue work to
* @work: struct binder_work to add to list
*
* Adds the work to the todo list of the thread, and enables processing
* of the todo queue.
*
* Requires the proc->inner_lock to be held.
*/
static void binder_enqueue_thread_work_ilocked(struct binder_thread *thread,
struct binder_work *work)
{
WARN_ON(!list_empty(&thread->waiting_thread_node));
// Add the work to the thread's todo list
binder_enqueue_work_ilocked(work, &thread->todo);
/* (e)poll-based threads require an explicit wakeup signal when
* queuing their own work; they rely on these events to consume
* messages without I/O block. Without it, threads risk waiting
* indefinitely without handling the work.
*/
if (thread->looper & BINDER_LOOPER_STATE_POLL &&
thread->pid == current->pid && !thread->process_todo)
// Explicit wakeup, only needed for (e)poll-based looper threads queuing their own work
wake_up_interruptible_sync(&thread->wait);
thread->process_todo = true;
}
static void binder_enqueue_work_ilocked(struct binder_work *work,
struct list_head *target_list)
{
BUG_ON(target_list == NULL);
BUG_ON(work->entry.next && !list_empty(&work->entry));
// Append the work item to the list
list_add_tail(&work->entry, target_list);
}
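binder_enqueue_work_ilocked, and the container_of call we will meet in binder_thread_read, both rely on the kernel's intrusive-list pattern: the list node lives inside the work item, and the enclosing struct is recovered by pointer arithmetic. Below is a minimal user-space sketch of that pattern; demo_work and the simplified helpers are made up for illustration, only the idea mirrors the kernel.
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

/* recover the enclosing struct from a pointer to its embedded member */
#define container_of(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))

static void list_add_tail(struct list_head *new, struct list_head *head)
{
    new->prev = head->prev;
    new->next = head;
    head->prev->next = new;
    head->prev = new;
}

struct demo_work {            /* stand-in for struct binder_work */
    struct list_head entry;
    int type;
};

int main(void)
{
    struct list_head todo = { &todo, &todo };   /* empty circular list */
    struct demo_work w = { .type = 1 };         /* e.g. BINDER_WORK_TRANSACTION */
    list_add_tail(&w.entry, &todo);             /* enqueue, like binder_enqueue_work_ilocked */
    /* the reader side: take the head node and recover the work item */
    struct demo_work *got = container_of(todo.next, struct demo_work, entry);
    printf("dequeued work type=%d\n", got->type);
    return 0;
}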
3. service_manager is woken up; analyzing the service_manager source next
3.1 service_manager waits for data
int main(int argc, char **argv)
{
struct binder_state *bs;
bs = binder_open(128*1024);
if (!bs) {
ALOGE("failed to open binder driver\n");
return -1;
}
if (binder_become_context_manager(bs)) {
ALOGE("cannot become context manager (%s)\n", strerror(errno));
return -1;
}
svcmgr_handle = BINDER_SERVICE_MANAGER;
binder_loop(bs, svcmgr_handler);
return 0;
}
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
uint32_t readbuf[32];
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(uint32_t));
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
// Issue the read
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
break;
}
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
if (res == 0) {
ALOGE("binder_loop: unexpected reply?!\n");
break;
}
if (res < 0) {
ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
break;
}
}
}
// res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); enters the binder driver
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
void __user *ubuf = (void __user *)arg;
/*pr_info("binder_ioctl: %d:%d %x %lx\n",
proc->pid, current->pid, cmd, arg);*/
binder_selftest_alloc(&proc->alloc);
trace_binder_ioctl(cmd, arg);
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret)
goto err_unlocked;
// Get (or create) the binder_thread for this process
thread = binder_get_thread(proc);
if (thread == NULL) {
ret = -ENOMEM;
goto err;
}
switch (cmd) {
case BINDER_WRITE_READ:
ret = binder_ioctl_write_read(filp, arg, thread);
if (ret)
goto err;
break;
......
}
......
}
static int binder_ioctl_write_read(struct file *filp, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;
// Copy the binder_write_read control block from user space into the kernel
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
binder_debug(BINDER_DEBUG_READ_WRITE,
"%d:%d write %lld at %016llx, read %lld at %016llx\n",
proc->pid, thread->pid,
(u64)bwr.write_size, (u64)bwr.write_buffer,
(u64)bwr.read_size, (u64)bwr.read_buffer);
if (bwr.write_size > 0) {
......
}
if (bwr.read_size > 0) {
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
trace_binder_read_done(ret);
binder_inner_proc_lock(proc);
if (!binder_worklist_empty_ilocked(&proc->todo))
binder_wakeup_proc_ilocked(proc);
binder_inner_proc_unlock(proc);
if (ret < 0) {
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
binder_debug(BINDER_DEBUG_READ_WRITE,
"%d:%d wrote %lld of %lld, read return %lld of %lld\n",
proc->pid, thread->pid,
(u64)bwr.write_consumed, (u64)bwr.write_size,
(u64)bwr.read_consumed, (u64)bwr.read_size);
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}
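For reference, the binder_write_read struct being copied back and forth here is just a small control block from the uapi header; the driver consumes write_buffer, fills read_buffer, and updates the *_consumed fields so user space knows how much was processed.
struct binder_write_read {
	binder_size_t		write_size;	/* bytes to write */
	binder_size_t		write_consumed;	/* bytes consumed by driver */
	binder_uintptr_t	write_buffer;
	binder_size_t		read_size;	/* bytes to read */
	binder_size_t		read_consumed;	/* bytes consumed by driver */
	binder_uintptr_t	read_buffer;
};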
3.2 binder_thread_read reads the data
- When reading, the driver first writes a BR_NOOP into the user-space buffer
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
if (*consumed == 0) {
// Write BR_NOOP to user space first, so every read buffer starts with BR_NOOP
if (put_user(BR_NOOP, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
}
......
}
- Check whether there is data to read; if not, sleep until the sending side wakes this thread up
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
if (*consumed == 0) {
if (put_user(BR_NOOP, (uint32_t __user *)ptr)) // for every read, the data starts with BR_NOOP
return -EFAULT;
ptr += sizeof(uint32_t);
}
retry:
binder_inner_proc_lock(proc);
// Decide whether this thread should wait for process-wide work
wait_for_proc_work = binder_available_for_proc_work_ilocked(thread);
binder_inner_proc_unlock(proc);
thread->looper |= BINDER_LOOPER_STATE_WAITING;
trace_binder_wait_for_work(wait_for_proc_work,
!!thread->transaction_stack,
!binder_worklist_empty(proc, &thread->todo));
if (wait_for_proc_work) {
if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
BINDER_LOOPER_STATE_ENTERED))) {
binder_user_error("%d:%d ERROR: Thread waiting for process work before calling BC_REGISTER_LOOPER or BC_ENTER_LOOPER (state %x)\n",
proc->pid, thread->pid, thread->looper);
wait_event_interruptible(binder_user_error_wait,
binder_stop_on_user_error < 2);
}
binder_set_nice(proc->default_priority);
}
// If there is no work: non-blocking mode returns -EAGAIN, otherwise sleep
if (non_block) {
if (!binder_has_work(thread, wait_for_proc_work))
ret = -EAGAIN;
} else {
ret = binder_wait_for_work(thread, wait_for_proc_work);
}
thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
}
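binder_wait_for_work is where the thread actually goes to sleep. Abridged from the kernel source (details vary by version), it parks the thread on its own wait queue, and for process-level work also on proc->waiting_threads, until work arrives or a signal interrupts it:
static int binder_wait_for_work(struct binder_thread *thread,
				bool do_proc_work)
{
	DEFINE_WAIT(wait);
	struct binder_proc *proc = thread->proc;
	int ret = 0;

	binder_inner_proc_lock(proc);
	for (;;) {
		prepare_to_wait(&thread->wait, &wait, TASK_INTERRUPTIBLE);
		if (binder_has_work_ilocked(thread, do_proc_work))
			break;
		if (do_proc_work)
			list_add(&thread->waiting_thread_node,
				 &proc->waiting_threads);
		binder_inner_proc_unlock(proc);
		schedule();	/* sleep here until woken by the sender */
		binder_inner_proc_lock(proc);
		list_del_init(&thread->waiting_thread_node);
		if (signal_pending(current)) {
			ret = -EINTR;
			break;
		}
	}
	finish_wait(&thread->wait, &wait);
	binder_inner_proc_unlock(proc);
	return ret;
}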
- Data arrives and the thread is woken up
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
......
while (1) {
uint32_t cmd;
struct binder_transaction_data_secctx tr;
struct binder_transaction_data *trd = &tr.transaction_data;
struct binder_work *w = NULL;
struct list_head *list = NULL;
struct binder_transaction *t = NULL;
struct binder_thread *t_from;
size_t trsize = sizeof(*trd);
binder_inner_proc_lock(proc);
// If this thread's todo list has work, take it from there
if (!binder_worklist_empty_ilocked(&thread->todo))
list = &thread->todo;
// Otherwise, if proc->todo has work, take it from there
else if (!binder_worklist_empty_ilocked(&proc->todo) &&
wait_for_proc_work)
list = &proc->todo;
else {
binder_inner_proc_unlock(proc);
/* no data added */
if (ptr - buffer == 4 && !thread->looper_need_return)
goto retry;
break;
}
if (end - ptr < sizeof(tr) + 4) {
binder_inner_proc_unlock(proc);
break;
}
w = binder_dequeue_work_head_ilocked(list);
if (binder_worklist_empty_ilocked(&thread->todo))
thread->process_todo = false;
// Dispatch on the work type; when the server woke service_manager and queued the data, binder_work.type was BINDER_WORK_TRANSACTION
switch (w->type) {
case BINDER_WORK_TRANSACTION: {
binder_inner_proc_unlock(proc);
// Recover the enclosing binder_transaction from the work item
t = container_of(w, struct binder_transaction, work);
} break;
......
}
......
// Process the transaction
if (t->buffer->target_node) {
struct binder_node *target_node = t->buffer->target_node;
trd->target.ptr = target_node->ptr;
trd->cookie = target_node->cookie;
t->saved_priority = task_nice(current);
if (t->priority < target_node->min_priority &&
!(t->flags & TF_ONE_WAY))
binder_set_nice(t->priority);
else if (!(t->flags & TF_ONE_WAY) ||
t->saved_priority > target_node->min_priority)
binder_set_nice(target_node->min_priority);
// The server sent this with BC_TRANSACTION; when the driver delivers it
// to the target (service_manager), the command becomes BR_TRANSACTION
cmd = BR_TRANSACTION;
} else {
trd->target.ptr = 0;
trd->cookie = 0;
cmd = BR_REPLY;
}
// Fill in the binder_transaction_data
trd->code = t->code;
trd->flags = t->flags;
trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid);
t_from = binder_get_txn_from(t);
if (t_from) {
struct task_struct *sender = t_from->proc->tsk;
trd->sender_pid =
task_tgid_nr_ns(sender,
task_active_pid_ns(current));
} else {
trd->sender_pid = 0;
}
ret = binder_apply_fd_fixups(proc, t);
if (ret) {
struct binder_buffer *buffer = t->buffer;
bool oneway = !!(t->flags & TF_ONE_WAY);
int tid = t->debug_id;
if (t_from)
binder_thread_dec_tmpref(t_from);
buffer->transaction = NULL;
binder_cleanup_transaction(t, "fd fixups failed",
BR_FAILED_REPLY);
binder_free_buf(proc, thread, buffer, true);
binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
"%d:%d %stransaction %d fd fixups failed %d/%d, line %d\n",
proc->pid, thread->pid,
oneway ? "async " :
(cmd == BR_REPLY ? "reply " : ""),
tid, BR_FAILED_REPLY, ret, __LINE__);
if (cmd == BR_REPLY) {
cmd = BR_FAILED_REPLY;
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
binder_stat_br(proc, thread, cmd);
break;
}
continue;
}
trd->data_size = t->buffer->data_size;
trd->offsets_size = t->buffer->offsets_size;
trd->data.ptr.buffer = t->buffer->user_data;
trd->data.ptr.offsets = trd->data.ptr.buffer +
ALIGN(t->buffer->data_size,
sizeof(void *));
tr.secctx = t->security_ctx;
if (t->security_ctx) {
cmd = BR_TRANSACTION_SEC_CTX;
trsize = sizeof(tr);
}
// Copy cmd into the user-space (service_manager) buffer
if (put_user(cmd, (uint32_t __user *)ptr)) {
if (t_from)
binder_thread_dec_tmpref(t_from);
binder_cleanup_transaction(t, "put_user failed",
BR_FAILED_REPLY);
return -EFAULT;
}
ptr += sizeof(uint32_t);
// Copy tr into the user-space (service_manager) buffer
if (copy_to_user(ptr, &tr, trsize)) {
if (t_from)
binder_thread_dec_tmpref(t_from);
binder_cleanup_transaction(t, "copy_to_user failed",
BR_FAILED_REPLY);
return -EFAULT;
}
}
......
}
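The trd filled in above is a binder_transaction_data, the uapi struct that user space parses right after the BR_TRANSACTION command word (binder_transaction_data_secctx, shown next, merely wraps it when a security context is attached):
struct binder_transaction_data {
	/* The first two are only used for BC_TRANSACTION and BR_TRANSACTION,
	 * identifying the target and contents of the transaction.
	 */
	union {
		__u32	handle;		/* target descriptor of command transaction */
		binder_uintptr_t ptr;	/* target descriptor of return transaction */
	} target;
	binder_uintptr_t	cookie;	/* target object cookie */
	__u32		code;		/* transaction command */

	/* General information about the transaction. */
	__u32		flags;
	__kernel_pid_t	sender_pid;
	__kernel_uid32_t	sender_euid;
	binder_size_t	data_size;	/* number of bytes of data */
	binder_size_t	offsets_size;	/* number of bytes of offsets */

	/* If this transaction is inline, the data immediately
	 * follows here; otherwise, it ends with a pointer to
	 * the data buffer.
	 */
	union {
		struct {
			binder_uintptr_t	buffer;	/* transaction data */
			binder_uintptr_t	offsets; /* offsets to flat_binder_objects */
		} ptr;
		__u8	buf[8];
	} data;
};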
struct binder_transaction_data_secctx {
struct binder_transaction_data transaction_data;
binder_uintptr_t secctx;
};
/**
* struct binder_work - work enqueued on a worklist
* @entry: node enqueued on list
* @type: type of work to be performed
*
* There are separate work lists for proc, thread, and node (async).
*/
struct binder_work {
struct list_head entry;
enum binder_work_type {
BINDER_WORK_TRANSACTION = 1,
BINDER_WORK_TRANSACTION_COMPLETE,
BINDER_WORK_TRANSACTION_PENDING,
BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT,
BINDER_WORK_RETURN_ERROR,
BINDER_WORK_NODE,
BINDER_WORK_DEAD_BINDER,
BINDER_WORK_DEAD_BINDER_AND_CLEAR,
BINDER_WORK_CLEAR_DEATH_NOTIFICATION,
} type;
};
3.3 How the data read after service_manager is woken up is organized
From the analysis above, the data service_manager reads is laid out as follows:
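A textual sketch of the readbuf contents, per the put_user/copy_to_user calls above (the payload values are those of our registration scenario):
readbuf after the read returns:
+---------+----------------+---------------------------------------------+
| BR_NOOP | BR_TRANSACTION | struct binder_transaction_data              |
+---------+----------------+---------------------------------------------+
                           | target.ptr / cookie (from the target binder_node)
                           | code = SVC_MGR_ADD_SERVICE
                           | data.ptr.buffer  -> mmap'ed payload
                           |   ("android.os.IServiceManager", "hello",
                           |    flat_binder_object, ...)
                           | data.ptr.offsets -> offsets of the
                           |    flat_binder_object entries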
3.4 Continuing with the service_manager source after it reads the client's data
- Parse the data that was read
void binder_loop(struct binder_state *bs, binder_handler func)
{
int res;
struct binder_write_read bwr;
uint32_t readbuf[32];
bwr.write_size = 0;
bwr.write_consumed = 0;
bwr.write_buffer = 0;
readbuf[0] = BC_ENTER_LOOPER;
binder_write(bs, readbuf, sizeof(uint32_t));
for (;;) {
bwr.read_size = sizeof(readbuf);
bwr.read_consumed = 0;
bwr.read_buffer = (uintptr_t) readbuf;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // the read returns with data
if (res < 0) {
ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
break;
}
// Parse the data just read
res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
if (res == 0) {
ALOGE("binder_loop: unexpected reply?!\n");
break;
}
if (res < 0) {
ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
break;
}
}
}
int binder_parse(struct binder_state *bs, struct binder_io *bio,
uintptr_t ptr, size_t size, binder_handler func)
{
int r = 1;
uintptr_t end = ptr + (uintptr_t) size;
while (ptr < end) {
uint32_t cmd = *(uint32_t *) ptr;
ptr += sizeof(uint32_t);
#if TRACE
fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
switch(cmd) {
case BR_NOOP:
break;
......
// Handling of received data (the payload carries the service name and the service handle)
case BR_TRANSACTION: {
struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
if ((end - ptr) < sizeof(*txn)) {
ALOGE("parse: txn too small!\n");
return -1;
}
binder_dump_txn(txn);
if (func) {
unsigned rdata[256/4];
struct binder_io msg;
struct binder_io reply;
int res;
// Construct the binder_io structs
bio_init(&reply, rdata, sizeof(rdata), 4);
bio_init_from_txn(&msg, txn);
// Process the binder_io
res = func(bs, txn, &msg, &reply); // func = svcmgr_handler, used to add/look up services
// Send the processed result back to the server
binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
}
ptr += sizeof(*txn);
break;
}
......
default:
ALOGE("parse: OOPS %d\n", cmd);
return -1;
}
}
return r;
}
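bio_init_from_txn does no copying: it simply points the binder_io at the buffer and offsets the driver handed back. A sketch based on the binder.c from our demo's service_manager (field names may differ slightly by version):
void bio_init_from_txn(struct binder_io *bio, struct binder_transaction_data *txn)
{
	/* data/offs walk forward as items are consumed; data0/offs0 remember the start */
	bio->data = bio->data0 = (char *)(intptr_t)txn->data.ptr.buffer;
	bio->offs = bio->offs0 = (binder_size_t *)(intptr_t)txn->data.ptr.offsets;
	bio->data_avail = txn->data_size;
	bio->offs_avail = txn->offsets_size / sizeof(size_t);
	bio->flags = BIO_F_SHARED;	/* buffer is owned by the driver, not malloc'ed */
}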
- Get the service name and the service reference
int svcmgr_handler(struct binder_state *bs,
struct binder_transaction_data *txn,
struct binder_io *msg,
struct binder_io *reply)
{
struct svcinfo *si;
uint16_t *s;
size_t len;
uint32_t handle;
uint32_t strict_policy;
int allow_isolated;
//ALOGI("target=%x code=%d pid=%d uid=%d\n",
// txn->target.handle, txn->code, txn->sender_pid, txn->sender_euid);
if (txn->target.handle != svcmgr_handle)
return -1;
if (txn->code == PING_TRANSACTION)
return 0;
// Equivalent to Parcel::enforceInterface(), reading the RPC
// header with the strict mode policy mask and the interface name.
// Note that we ignore the strict_policy and don't propagate it
// further (since we do no outbound RPCs anyway).
strict_policy = bio_get_uint32(msg);
s = bio_get_string16(msg, &len); // the string passed in is android.os.IServiceManager
if (s == NULL) {
return -1;
}
if ((len != (sizeof(svcmgr_id) / 2)) ||
memcmp(svcmgr_id, s, sizeof(svcmgr_id))) { // the interface name must be android.os.IServiceManager
fprintf(stderr,"invalid id %s\n", str8(s, len));
return -1;
}
switch(txn->code) {
case SVC_MGR_GET_SERVICE:
case SVC_MGR_CHECK_SERVICE:
s = bio_get_string16(msg, &len);
if (s == NULL) {
return -1;
}
handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
if (!handle)
break;
bio_put_ref(reply, handle);
return 0;
case SVC_MGR_ADD_SERVICE:
s = bio_get_string16(msg, &len); // get the service name, e.g. "hello"
if (s == NULL) {
return -1;
}
handle = bio_get_ref(msg); // get the handle that refers to the service
allow_isolated = bio_get_uint32(msg) ? 1 : 0;
// Add the service
if (do_add_service(bs, s, len, handle, txn->sender_euid,
allow_isolated, txn->sender_pid))
return -1;
break;
case SVC_MGR_LIST_SERVICES: {
uint32_t n = bio_get_uint32(msg);
if (!svc_can_list(txn->sender_pid)) {
ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
txn->sender_euid);
return -1;
}
si = svclist;
while ((n-- > 0) && si)
si = si->next;
if (si) {
bio_put_string16(reply, si->name);
return 0;
}
return -1;
}
default:
ALOGE("unknown code %d\n", txn->code);
return -1;
}
bio_put_uint32(reply, 0); // after handling, construct a reply carrying status 0
return 0;
}
- Adding the service
int do_add_service(struct binder_state *bs,
const uint16_t *s, size_t len,
uint32_t handle, uid_t uid, int allow_isolated,
pid_t spid)
{
struct svcinfo *si;
//ALOGI("add_service('%s',%x,%s) uid=%d\n", str8(s, len), handle,
// allow_isolated ? "allow_isolated" : "!allow_isolated", uid);
if (!handle || (len == 0) || (len > 127))
return -1;
if (!svc_can_register(s, len, spid)) {
ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
str8(s, len), handle, uid);
return -1;
}
si = find_svc(s, len);
if (si) {
if (si->handle) {
ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
str8(s, len), handle, uid);
svcinfo_death(bs, si);
}
si->handle = handle;
} else {
// If no svcinfo is found in the list, allocate one
si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
if (!si) {
ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
str8(s, len), handle, uid);
return -1;
}
si->handle = handle;
si->len = len;
memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
si->name[len] = '\0';
si->death.func = (void*) svcinfo_death;
si->death.ptr = si;
si->allow_isolated = allow_isolated;
// Insert it at the head of the list
si->next = svclist;
svclist = si;
}
ALOGI("add_service('%s'), handle = %d\n", str8(s, len), handle);
binder_acquire(bs, handle); // bump the reference count on the handle
binder_link_to_death(bs, handle, &si->death);
return 0;
}
struct svcinfo
{
struct svcinfo *next;
uint32_t handle;
struct binder_death death;
int allow_isolated;
size_t len;
uint16_t name[0];
};
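do_add_service above looks up existing entries with find_svc, which is just a linear walk of this singly-linked svclist (from the same service_manager source):
struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
	struct svcinfo *si;

	for (si = svclist; si; si = si->next) {
		/* match on both length and UTF-16 name bytes */
		if ((len == si->len) &&
		    !memcmp(s16, si->name, len * sizeof(uint16_t))) {
			return si;
		}
	}
	return NULL;
}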
void binder_acquire(struct binder_state *bs, uint32_t target)
{
uint32_t cmd[2];
cmd[0] = BC_ACQUIRE;
cmd[1] = target;
binder_write(bs, cmd, sizeof(cmd));
}
void binder_link_to_death(struct binder_state *bs, uint32_t target, struct binder_death *death)
{
struct {
uint32_t cmd;
struct binder_handle_cookie payload;
} __attribute__((packed)) data;
data.cmd = BC_REQUEST_DEATH_NOTIFICATION;
data.payload.handle = target;
data.payload.cookie = (uintptr_t) death;
binder_write(bs, &data, sizeof(data)); // ask the driver to notify service_manager when the service provider dies
}
int binder_write(struct binder_state *bs, void *data, size_t len)
{
struct binder_write_read bwr;
int res;
bwr.write_size = len;
bwr.write_consumed = 0;
bwr.write_buffer = (uintptr_t) data;
bwr.read_size = 0;
bwr.read_consumed = 0;
bwr.read_buffer = 0;
res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
if (res < 0) {
fprintf(stderr,"binder_write: ioctl failed (%s)\n",
strerror(errno));
}
return res;
}
- Send the processed reply back to the server
void binder_send_reply(struct binder_state *bs,
struct binder_io *reply,
binder_uintptr_t buffer_to_free,
int status)
{
struct {
uint32_t cmd_free;
binder_uintptr_t buffer;
uint32_t cmd_reply;
struct binder_transaction_data txn;
} __attribute__((packed)) data;
data.cmd_free = BC_FREE_BUFFER; // the data the server copied into the kernel buffer mapped by service_manager has been consumed and can be freed
data.buffer = buffer_to_free;
data.cmd_reply = BC_REPLY; // send the reply that svcmgr_handler built at the end (it carries status 0)
data.txn.target.ptr = 0;
data.txn.cookie = 0;
data.txn.code = 0;
if (status) {
data.txn.flags = TF_STATUS_CODE;
data.txn.data_size = sizeof(int);
data.txn.offsets_size = 0;
data.txn.data.ptr.buffer = (uintptr_t)&status;
data.txn.data.ptr.offsets = 0;
} else {
data.txn.flags = 0;
data.txn.data_size = reply->data - reply->data0;
data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
}
binder_write(bs, &data, sizeof(data));
}
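Note how the packed struct lets a single binder_write carry two back-to-back commands; binder_thread_write consumes them in order (sketch):
/* write_buffer as consumed by binder_thread_write:
 *
 *   BC_FREE_BUFFER | buffer_to_free          -> release the kernel buffer that
 *                                               held the server's transaction
 *   BC_REPLY       | binder_transaction_data -> routed back to the server
 *                                               blocked in binder_thread_read,
 *                                               delivered there as BR_REPLY
 */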