Series index
All articles in this series are indexed at: https://blog.csdn.net/handsomethefirst/article/details/138226266?spm=1001.2014.3001.5501
Contents
Series index
1. Overview
1.1 Flow overview
1.2 Sequence diagram
2. Source code analysis
2.1 Starting servicemanager
2.2 servicemanager.rc
2.3 main.cpp
2.4 ProcessState::initWithDriver
2.5 ProcessState::init
2.6 ProcessState::ProcessState
2.7 open_driver
2.8 binder_open
2.9 binder_ioctl: getting the driver version
2.10 binder_ioctl: setting the max binder thread count
2.11 binder_mmap
2.12 binder_update_page_range
2.13 setThreadPoolMaxThreadCount
2.14 The ServiceManager constructor
2.15 ServiceManager::addService
2.16 getCallingContext
2.17 IPCThreadState::self
2.18 IPCThreadState::IPCThreadState
2.19 Access::canAdd
2.20 IPCThreadState::setTheContextObject
2.21 ProcessState::becomeContextManager
2.22 binder_ioctl
2.23 binder_new_node
2.24 Looper::prepare
2.25 Looper::getForThread
2.26 Looper::initTLSKey
2.27 Looper::Looper
2.28 Looper::rebuildEpollLocked
2.29 Looper::setForThread
2.30 sp<BinderCallback> setupTo
2.31 IPCThreadState::setupPolling
2.32 Parcel::writeInt32
2.33 IPCThreadState::flushCommands
2.34 IPCThreadState::talkWithDriver
2.35 binder_ioctl
2.36 binder_thread_write
2.37 Looper::addFd
2.38 ClientCallbackCallback::setupTo
2.39 pollAll
2.40 pollOnce
2.41 pollInner
1. Overview
Having covered the overall servicemanager framework in the previous article, this article walks through the startup flow of the servicemanager service at the source-code level.
1.1 Flow overview
Step 1: After the init process starts, it reads each service's rc file and starts the servicemanager service.
Step 2: In servicemanager's main function, the /dev/binder driver is opened and a block of memory is requested; through mmap, a region of servicemanager's user address space is mapped into the driver, which removes one data copy.
Step 3: Still in main, the ServiceManager object is created and registered into servicemanager's own service map.
Step 4: becomeContextManager tells the driver that this process is now the binder context manager.
Step 5: servicemanager's main thread enters an endless loop and calls epoll_wait() to watch /dev/binder for readable data; when data arrives, the registered callback dispatches on the code sent by the client and calls get()/add()/... to look up or register binder services. (A minimal raw-syscall sketch of steps 2–5 follows.)
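The sketch below is a minimal, self-contained illustration (not the actual servicemanager code; the names sketch_main and kVmSize, and the mapping size, are made up for illustration) of the kernel-facing calls that steps 2–5 boil down to:
#include <cstddef>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <sys/epoll.h>
#include <linux/android/binder.h>

int sketch_main() {
    // Step 2: open the driver and map a receive buffer (ProcessState uses BINDER_VM_SIZE).
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) return -1;
    struct binder_version vers {};
    ioctl(fd, BINDER_VERSION, &vers);                  // sanity-check the protocol version
    const size_t kVmSize = 1 * 1024 * 1024 - 2 * 4096; // illustrative size
    void* vm = mmap(nullptr, kVmSize, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (vm == MAP_FAILED) return -1;
    // Step 4: register as the context manager (legacy variant without the _EXT object).
    int unused = 0;
    ioctl(fd, BINDER_SET_CONTEXT_MGR, &unused);
    // Step 5: wait for the driver fd to become readable, then dispatch commands.
    int epfd = epoll_create1(EPOLL_CLOEXEC);
    struct epoll_event ev {};
    ev.events = EPOLLIN;
    ev.data.fd = fd;
    epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
    while (true) {
        struct epoll_event out {};
        epoll_wait(epfd, &out, 1, -1);                 // blocks until the driver has data
        // ... issue BINDER_WRITE_READ here and dispatch to add()/get()/... handlers
    }
}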
1.2 Sequence diagram
(Sequence diagram of the full servicemanager startup flow; the image is fairly dense, so zoom in to view the details.)
2. Source code analysis
2.1 Starting servicemanager
1. In init.rc, once the init process is up it starts the three binder daemons: servicemanager, hwservicemanager, and vndservicemanager.
[/system/core/rootdir/init.rc]
on init
...
start servicemanager //启动servicemanager服务
start hwservicemanager
start vndservicemanager
2.2 servicemanager.rc
1. After init starts, it loads the service's rc file from the configured path and then starts the servicemanager service.
service servicemanager /system/bin/servicemanager
class core animation //服务的类为core和animation
user system //在启动服务前将用户切换为system
group system readproc //在启动前将用户组切换为system和readproc
critical //表明这个服务是至关重要的服务,如果它在四分钟内退出超过四次,则设备将重启进入恢复模式
onrestart restart apexd //当此服务重启后,重启apexd服务
onrestart restart audioserver //当此服务重启后,重启audioserver服务
onrestart restart gatekeeperd //当此服务重启后,重启gatekeeperd服务
onrestart class_restart main //当此服务重启后,重启所有class为main的服务
onrestart class_restart hal //当此服务重启后,重启所有class为 hal的服务
onrestart class_restart early_hal //当此服务重启后,重启所有class为early_hal的服务
writepid /dev/cpuset/system-background/tasks //写入当前servicemanager进程的pid到/dev/cpuset/system-background/tasks文件中
shutdown critical //设置Service进程的关闭行为。在关机期间,当前服务在shutdown超时之前不会被关闭。
2.3 main.cpp
Main responsibilities:
1. Open the /dev/binder driver, complete the mmap memory mapping, and set the driver's maximum thread count.
2. Create the ServiceManager object.
3. Register the ServiceManager object into its own service map.
4. Call becomeContextManager so that the driver records this process as the binder context manager.
5. Enter an endless loop and call epoll_wait() to watch /dev/binder for readable data; when data arrives, the callback dispatches on the client's code and calls get()/add()/... to look up or register binder services.
int main(int argc, char** argv) {//servicemanager入口,与/dev/binder交互,vndservicemanager也是走这个口,只是init时,会传入/dev/vndbinder值
if (argc > 2) {
LOG(FATAL) << "usage: " << argv[0] << " [binder driver]";
}
const char* driver = argc == 2 ? argv[1] : "/dev/binder";//因为没传入值所以driver值是/dev/binder
sp<ProcessState> ps = ProcessState::initWithDriver(driver);//传入的值是/dev/binder,创建了ProcessState对象,里面主要是通过mmap内存映射一块地址空间
ps->setThreadPoolMaxThreadCount(0);//设置最大线程数为0
ps->setCallRestriction(ProcessState::CallRestriction::FATAL_IF_NOT_ONEWAY);
sp<ServiceManager> manager = sp<ServiceManager>::make(std::make_unique<Access>());//创建了一个ServiceManager对象
if (!manager->addService("manager", manager, false /*allowIsolated*/, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk()) {//将自身注册到ServiceManager当中
LOG(ERROR) << "Could not self register servicemanager";
}
IPCThreadState::self()->setTheContextObject(manager);
ps->becomeContextManager();//成为管理者
sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);
BinderCallback::setupTo(looper);
ClientCallbackCallback::setupTo(looper, manager);
while(true) {//死循环,
looper->pollAll(-1);//阻塞在此处,等待监听的fd有消息的到来
}
// should not be reached
return EXIT_FAILURE;
}
2.4 ProcessState::initWithDriver
sp<ProcessState> ProcessState::initWithDriver(const char* driver)
{
return init(driver, true /*requireDefault*/);
}
2.5 ProcessState::init
1. Creates the ProcessState object. This object opens the binder driver and performs the memory mapping: a region of user space is mapped to kernel space, so once the mapping is established, changes made from user space are directly visible to the kernel, and vice versa.
sp<ProcessState> ProcessState::init(const char *driver, bool requireDefault)
{
[[clang::no_destroy]] static sp<ProcessState> gProcess;
[[clang::no_destroy]] static std::mutex gProcessMutex;
/**
if (driver == nullptr) {//此时不为空,不执行
std::lock_guard<std::mutex> l(gProcessMutex);
return gProcess;
}
*/
[[clang::no_destroy]] static std::once_flag gProcessOnce;//call_once的flag
std::call_once(gProcessOnce, [&](){//call_once表示这个函数只执行一次
if (access(driver, R_OK) == -1) {//access判断/dev/binder文件是否存在
ALOGE("Binder driver %s is unavailable. Using /dev/binder instead.", driver);
driver = "/dev/binder";
}
std::lock_guard<std::mutex> l(gProcessMutex);
gProcess = sp<ProcessState>::make(driver);//此处使用的是安卓智能指针的构造,内部会调用new ProcessState
});
if (requireDefault) {//传入的是true,打印log
LOG_ALWAYS_FATAL_IF(gProcess->getDriverName() != driver,
"ProcessState was already initialized with %s,"
" can't initialize with %s.",
gProcess->getDriverName().c_str(), driver);
}
return gProcess;
}
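As a standalone illustration of the gProcessOnce pattern above (a sketch with made-up names, not Android code), std::call_once guarantees the initialization lambda runs exactly once per process, even if several threads race into init():
#include <memory>
#include <mutex>

static std::shared_ptr<int> gInstance;
static std::once_flag gOnce;

std::shared_ptr<int> getInstance() {
    std::call_once(gOnce, [] {
        gInstance = std::make_shared<int>(42);   // runs at most once for the whole process
    });
    return gInstance;
}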
2.6 ProcessState::ProcessState
ProcessState::ProcessState(const char *driver)
: mDriverName(String8(driver))//driver是/dev/binder
, mDriverFD(open_driver(driver))//打开binder驱动返回的fd文件描述符
, mVMStart(MAP_FAILED)//
, mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)//保护线程数量的锁,pthread_mutex_t mThreadCountLock;
, mThreadCountDecrement(PTHREAD_COND_INITIALIZER)//条件变量。pthread_cond_t mThreadCountDecrement;
, mExecutingThreadsCount(0)
, mWaitingForThreads(0)
, mMaxThreads(DEFAULT_MAX_BINDER_THREADS)//最大线程数15
, mStarvationStartTimeMs(0)
, mThreadPoolStarted(false)
, mThreadPoolSeq(1)
, mCallRestriction(CallRestriction::NONE)
{
if (mDriverFD >= 0) {
// mmap binder，内存映射。通过mmap申请BINDER_VM_SIZE大小的一块内存用来接收事务，对应binder驱动的binder_mmap函数。
mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
/**
if (mVMStart == MAP_FAILED) {//如果内存映射失败
// *sigh*
ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
close(mDriverFD);
mDriverFD = -1;
mDriverName.clear();
}
*/
}
}
2.7 open_driver
Main responsibilities:
1. open() the binder driver; the call traps into the kernel and reaches binder_open(), and the resulting fd is returned.
2. ioctl(BINDER_VERSION) to read the driver's protocol version into vers, then check that it matches the user-space protocol version.
3. ioctl(BINDER_SET_MAX_THREADS) to set the driver's maximum binder thread count.
static int open_driver(const char *driver)
{
int fd = open(driver, O_RDWR | O_CLOEXEC);//driver值是/dev/binder,打开binder驱动,陷入内核,此
//open对应_open,然后对应binder驱动层的binder_open,binder_open中主要是为/dev/binder这个驱动设备节点,创建
//对应的proc对象,然后通过fd返回。这样就可以和驱动进行通信。
if (fd >= 0) {//大于0代表打开成功
int vers = 0;
status_t result = ioctl(fd, BINDER_VERSION, &vers);//获取binder驱动的版本信息,用vers保存
/**正常不执行
if (result == -1) {//如果获取信息失败,则关闭binder驱动
ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
close(fd);
fd = -1;
}
*/
/**
if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {//如果获取的版本信息不匹配,则关闭binder驱动
ALOGE("Binder driver protocol(%d) does not match user space protocol(%d)! ioctl() return value: %d",
vers, BINDER_CURRENT_PROTOCOL_VERSION, result);
close(fd);
fd = -1;
}
*/
size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;//设置最大线程数15
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);//设置binder驱动最大线程数
/**
if (result == -1) {
ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
}
*/
uint32_t enable = DEFAULT_ENABLE_ONEWAY_SPAM_DETECTION;//DEFAULT_ENABLE_ONEWAY_SPAM_DETECTION值是1,表示默认启用更新垃圾检测
result = ioctl(fd, BINDER_ENABLE_ONEWAY_SPAM_DETECTION, &enable);//设置到binder驱动中
/**
if (result == -1) {
ALOGD("Binder ioctl to enable oneway spam detection failed: %s", strerror(errno));
}
*/
}
/**
else {
ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
}
*/
return fd;
}
2.8 binder_open
1. Allocate a binder_proc object for the current servicemanager (sm) process; binder_proc records the binder state of one process.
2. Initialize sm's binder_proc: set up the todo queue, record the default priority, and so on.
3. Add sm's binder_proc to the global binder_procs list; the driver keeps this global list to track every process's binder_proc object.
//主要作用：binder驱动为用户进程创建该进程自己的binder_proc实体
static int binder_open(struct inode *nodp, struct file *filp)
{
struct binder_proc *proc;//proc表示驱动中binder进程
proc = kzalloc(sizeof(*proc), GFP_KERNEL);//为binder_proc结构体在分配kernel内存空间
if (proc == NULL)
return -ENOMEM;
//下面是对binder_proc进行初始化,binder_proc用于管理数据的记录体
get_task_struct(current);
proc->tsk = current;//current代表当前线程。当前线程此时是servicemanager的主线程,当前线程的task保存在binder进程的tsk
INIT_LIST_HEAD(&proc->todo);//初始化todo列表
init_waitqueue_head(&proc->wait);//初始化wait列表
proc->default_priority = task_nice(current);//当前进程的nice值转化为进程优先级
binder_lock(__func__);
binder_stats_created(BINDER_STAT_PROC);//BINDER_PROC对象创建+1
hlist_add_head(&proc->proc_node, &binder_procs);//将proc_node节点添加到binder_procs为表头的队列
proc->pid = current->group_leader->pid;
INIT_LIST_HEAD(&proc->delivered_death);
filp->private_data = proc;//将 proc 保存到filp中,这样下次可以通过filp找到此proc
binder_unlock(__func__);
if (binder_debugfs_dir_entry_proc) {
char strbuf[11];
snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
proc->debugfs_entry = debugfs_create_file(strbuf, S_IRUGO,
binder_debugfs_dir_entry_proc, proc, &binder_proc_fops);
}
return 0;
}
//进程相关的参数
struct binder_proc {
struct hlist_node proc_node;//hlist_node表示哈希表中的一个节点。哈希表中的一个节点,用于标记该进程
struct rb_root threads;// rb_root表示红黑树。Binder线程池每一个Binder进程都有一个线程池,
//由Binder驱动来维护,Binder线程池中所有线程由一个红黑树来组织,RB树以线程ID为关键字 */
//上述红黑树的根节点
struct rb_root nodes;// 一系列Binder实体对象(binder_node)和Binder引用对象(binder_ref) */
//在用户空间:运行在Server端称为Binder本地对象,运行在Client端称为Binder代理对象*/
//在内核空间:Binder实体对象用来描述Binder本地对象,Binder引用对象来描述Binder代理对象 */
//Binder实体对象列表(RB树),关键字 ptr
struct rb_root refs_by_desc;//记录binder引用, 便于快速查找,binder_ref红黑树的根节点(以handle为key),它是Client在Binder驱动中的体现。
//Binder引用对象,关键字 desc
struct rb_root refs_by_node;//记录binder引用, 便于快速查找,binder_ref红黑树的根节点(以ptr为key),它是Client在Binder驱动中的体现,
//Binder引用对象,关键字 node
struct list_head waiting_threads;
int pid;//进程组pid
struct task_struct *tsk;//任务控制模块
const struct cred *cred;
struct hlist_node deferred_work_node;
int deferred_work;
bool is_dead;
struct list_head todo;//待处理队列,进程每接收到一个通信请求,Binder将其封装成一个工作项,保存在待处理队列to_do中 */
struct binder_stats stats;
struct list_head delivered_death;
int max_threads;// Binder驱动程序最多可以请求进程注册线程的最大数量,
//进程可以调用ioctl注册线程到Binder驱动程序中,当线程池中没有足够空闲线程来处理事务时,Binder驱动可以主动要求进程注册更多的线程到Binder线程池中 */
// Binder驱动程序最多可以请求进程注册线程的最大数量
int requested_threads;
int requested_threads_started;
int tmp_ref;
long default_priority;
struct dentry *debugfs_entry;
struct binder_alloc alloc;
struct binder_context *context;//存储binder_node和binder_context_mgr_uid以及name
spinlock_t inner_lock;
spinlock_t outer_lock;
struct dentry *binderfs_entry;
};
2.9 binder_ioctl: getting the driver version
Main responsibility:
1. This ioctl returns the driver's protocol version; note that it performs one copy (put_user) from kernel space to user space.
//获取binder版本号分析
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;//binder线程
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
trace_binder_ioctl(cmd, arg);
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//binder_stop_on_user_error值是0,
//binder_stop_on_user_error < 2为假时,休眠,故此处不休眠
if (ret)
goto err_unlocked;
binder_lock(__func__);
thread = binder_get_thread(proc);//获取当前sm的proc实体对象的binder_thread
if (thread == NULL) {
ret = -ENOMEM;
goto err;
}
switch (cmd) {
case BINDER_VERSION://获取版本号
if (size != sizeof(struct binder_version)) {
ret = -EINVAL;
goto err;
}
if (put_user(BINDER_CURRENT_PROTOCOL_VERSION, &((struct binder_version *)ubuf)->protocol_version)) {
//将驱动程序的BINDER_CURRENT_PROTOCOL_VERSION变量地址,拷贝sizeof(*ptr)的ubuf用户空间地址。
//分析,此处是有一次拷贝的。
ret = -EINVAL;
goto err;
}
break;
default:
ret = -EINVAL;
goto err;
}
ret = 0;
err:
if (thread)
thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
binder_unlock(__func__);
wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//不休眠
if (ret && ret != -ERESTARTSYS)
printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
trace_binder_ioctl_done(ret);
return ret;
}
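From the user-space side, the single put_user() above lands in the protocol_version field of a struct binder_version. A small sketch of the matching check (equivalent in effect to what open_driver() did in section 2.7; binderVersionMatches is a made-up helper name):
#include <sys/ioctl.h>
#include <linux/android/binder.h>

bool binderVersionMatches(int fd) {
    struct binder_version vers {};
    if (ioctl(fd, BINDER_VERSION, &vers) == -1) return false;   // kernel put_user() fills vers.protocol_version
    return vers.protocol_version == BINDER_CURRENT_PROTOCOL_VERSION;
}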
2.10 binder_ioctl: setting the max binder thread count
//设置binder最大线程数分析
//ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);maxThreads值为15
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;//binder线程
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;//ubuf是15的地址
trace_binder_ioctl(cmd, arg);
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//不休眠
if (ret)
goto err_unlocked;
binder_lock(__func__);
thread = binder_get_thread(proc);//获取当前sm的proc实体的binder_thread
if (thread == NULL) {
ret = -ENOMEM;
goto err;
}
switch (cmd) {
case BINDER_SET_MAX_THREADS://设置binder最大线程数量
if (copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))) {//拷贝用户空间的地址到内核空间
ret = -EINVAL;
goto err;
}
break;
ret = 0;
err:
if (thread)
thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
binder_unlock(__func__);
wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret && ret != -ERESTARTSYS)
printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
trace_binder_ioctl_done(ret);
return ret;
}
2.11 binder_mmap
Main responsibilities:
1. get_vm_area allocates a contiguous range of kernel virtual address space, the same size as the user-space mapping.
2. binder_update_page_range allocates physical pages and maps them into both the kernel range and the user-space range.
In other words, servicemanager has now mapped a region of its address space into the kernel, which removes one data copy during transactions.
//filp对应fp,vma代表用户地址空间
static int binder_mmap(struct file *filp, struct vm_area_struct *vma)//vma描述了应用程序使用的用户虚拟空间
{
int ret;
struct vm_struct *area;//内核虚拟空间
struct binder_proc *proc = filp->private_data;//取出sm进程对应的binder_proc对象
const char *failure_string;
struct binder_buffer *buffer;//用于binder传输数据的缓冲区
if ((vma->vm_end - vma->vm_start) > SZ_4M)//保证映射内存大小不超过4M
vma->vm_end = vma->vm_start + SZ_4M;
if (vma->vm_flags & FORBIDDEN_MMAP_FLAGS) {//判断是否禁止mmap
ret = -EPERM;
failure_string = "bad vm_flags";
goto err_bad_arg;
}
vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE;
mutex_lock(&binder_mmap_lock);
if (proc->buffer) {//如果buffer不为空,则代表已经mmap过了
ret = -EBUSY;
failure_string = "already mapped";
goto err_already_mapped;
}
area = get_vm_area(vma->vm_end - vma->vm_start, VM_IOREMAP);//分配一个连续的内核虚拟空间,与用户进程虚拟空间大小一致
if (area == NULL) {
ret = -ENOMEM;
failure_string = "get_vm_area";
goto err_get_vm_area_failed;
}
proc->buffer = area->addr;//binder_proc对象的buffer指向内核虚拟空间的地址
proc->user_buffer_offset = vma->vm_start - (uintptr_t)proc->buffer;//计算偏移量,地址偏移量=用户空间地址-内核空间地址
mutex_unlock(&binder_mmap_lock);
proc->pages = kzalloc(sizeof(proc->pages[0]) * ((vma->vm_end - vma->vm_start) / PAGE_SIZE), GFP_KERNEL);//分配
//分配物理页的指针数组,大小等于用户虚拟内存/4K,此数组用于指示binder申请的物理页的状态
if (proc->pages == NULL) {
ret = -ENOMEM;
failure_string = "alloc page array";
goto err_alloc_pages_failed;
}
proc->buffer_size = vma->vm_end - vma->vm_start;//计算大小
vma->vm_ops = &binder_vm_ops;
vma->vm_private_data = proc;
if (binder_update_page_range(proc, 1, proc->buffer, proc->buffer + PAGE_SIZE, vma)) {//分配物理页面,同时映射到内核空间和进程空间,
//目前只分配1个page的物理页
ret = -ENOMEM;
failure_string = "alloc small buf";
goto err_alloc_small_buf_failed;
}
buffer = proc->buffer;//binder_buffer对象,指向proc的buffer地址,proc的buffer地址又指向内核虚拟空间的地址
INIT_LIST_HEAD(&proc->buffers);
list_add(&buffer->entry, &proc->buffers);//将内核虚拟空间的地址加入到所属进程的buffer队列
buffer->free = 1;
binder_insert_free_buffer(proc, buffer);//将空闲的buffer放入proc->free_buffer中
proc->free_async_space = proc->buffer_size / 2;
barrier();
proc->files = get_files_struct(proc->tsk);
proc->vma = vma;//binder_proc对象保存了用户空间的地址
proc->vma_vm_mm = vma->vm_mm;
/*printk(KERN_INFO "binder_mmap: %d %lx-%lx maps %p\n",
proc->pid, vma->vm_start, vma->vm_end, proc->buffer);*/
return 0;
//以下为错误处理的跳转标签
err_alloc_small_buf_failed:
kfree(proc->pages);
proc->pages = NULL;
err_alloc_pages_failed:
mutex_lock(&binder_mmap_lock);
vfree(proc->buffer);
proc->buffer = NULL;
err_get_vm_area_failed:
err_already_mapped:
mutex_unlock(&binder_mmap_lock);
err_bad_arg:
printk(KERN_ERR "binder_mmap: %d %lx-%lx %s failed %d\n",
proc->pid, vma->vm_start, vma->vm_end, failure_string, ret);
return ret;
}
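To make the user_buffer_offset bookkeeping above concrete, here is a symbolic sketch of the address relationship (not driver code; kernelToUser is a made-up helper):
// The same physical page is mapped at two virtual addresses, so converting
// between them is pure pointer arithmetic:
//   user_buffer_offset = vma->vm_start - proc->buffer
//   user_address       = kernel_address + user_buffer_offset
#include <cstdint>

uintptr_t kernelToUser(uintptr_t kernelAddr, uintptr_t userBufferOffset) {
    return kernelAddr + userBufferOffset;   // both addresses back the same physical page
}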
2.12 binder_update_page_range
Main responsibilities:
1. Allocate physical pages.
2. Map the physical pages into the kernel address range.
3. Map the same physical pages into the user-space (process) address range.
Depending on the allocate argument, binder_update_page_range can either allocate or free physical pages.
//binder_update_page_range 主要完成工作:分配物理内存空间,将物理空间映射到内核空间,将物理空间映射到进程空间。
//当然binder_update_page_range 既可以分配物理页面,也可以释放物理页面
static int binder_update_page_range(struct binder_proc *proc, int allocate,
void *start, void *end,
struct vm_area_struct *vma)
//参数说明:
//proc对应的进程的proc对象
//allocate表示是申请还是释放
//start,表示内核虚拟空间起始地址
//end,内核虚拟空间末尾地址
//用户空间地址
{
void *page_addr;
unsigned long user_page_addr;
struct vm_struct tmp_area;
struct page **page;
struct mm_struct *mm;
if (end <= start)
return 0;
trace_binder_update_page_range(proc, allocate, start, end);
if (vma)
mm = NULL;
else
mm = get_task_mm(proc->tsk);
if (mm) {
down_write(&mm->mmap_sem);
vma = proc->vma;
if (vma && mm != proc->vma_vm_mm) {
pr_err("binder: %d: vma mm and task mm mismatch\n",
proc->pid);
vma = NULL;
}
}
if (allocate == 0)
goto free_range;
if (vma == NULL) {
printk(KERN_ERR "binder: %d: binder_alloc_buf failed to "
"map pages in userspace, no vma\n", proc->pid);
goto err_no_vma;
}
//前面都在判断参数是否合法
//
for (page_addr = start; page_addr < end; page_addr += PAGE_SIZE) {
int ret;
struct page **page_array_ptr;
page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
BUG_ON(*page);
//分配物理内存
*page = alloc_page(GFP_KERNEL | __GFP_HIGHMEM | __GFP_ZERO);
if (*page == NULL) {
printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
"for page at %p\n", proc->pid, page_addr);
goto err_alloc_page_failed;
}
tmp_area.addr = page_addr;//tmp_area是内核地址,此时指向物理地址
tmp_area.size = PAGE_SIZE + PAGE_SIZE /* guard page? */;
page_array_ptr = page;
ret = map_vm_area(&tmp_area, PAGE_KERNEL, &page_array_ptr);//映射物理页地址到内核地址tmp_area
if (ret) {
printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
"to map page at %p in kernel\n",
proc->pid, page_addr);
goto err_map_kernel_failed;
}
user_page_addr =
(uintptr_t)page_addr + proc->user_buffer_offset;
ret = vm_insert_page(vma, user_page_addr, page[0]);//映射物理页地址到虚拟用户地址空间vma
if (ret) {
printk(KERN_ERR "binder: %d: binder_alloc_buf failed "
"to map page at %lx in userspace\n",
proc->pid, user_page_addr);
goto err_vm_insert_page_failed;
}
/* vm_insert_page does not seem to increment the refcount */
}
if (mm) {
up_write(&mm->mmap_sem);
mmput(mm);
}
return 0;
free_range:
for (page_addr = end - PAGE_SIZE; page_addr >= start;
page_addr -= PAGE_SIZE) {
page = &proc->pages[(page_addr - proc->buffer) / PAGE_SIZE];
if (vma)
zap_page_range(vma, (uintptr_t)page_addr +
proc->user_buffer_offset, PAGE_SIZE, NULL);
err_vm_insert_page_failed:
unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
err_map_kernel_failed:
__free_page(*page);
*page = NULL;
err_alloc_page_failed:
;
}
err_no_vma:
if (mm) {
up_write(&mm->mmap_sem);
mmput(mm);
}
return -ENOMEM;
}
2.13 setThreadPoolMaxThreadCount
1. Sets the maximum number of binder threads (servicemanager passes 0).
status_t ProcessState::setThreadPoolMaxThreadCount(size_t maxThreads) {//传入的是0
LOG_ALWAYS_FATAL_IF(mThreadPoolStarted && maxThreads < mMaxThreads,
"Binder threadpool cannot be shrunk after starting");
status_t result = NO_ERROR;
if (ioctl(mDriverFD, BINDER_SET_MAX_THREADS, &maxThreads) != -1) {//设置最大线程数为0
mMaxThreads = maxThreads;
} else {
result = -errno;
ALOGE("Binder ioctl to set max threads failed: %s", strerror(-result));
}
return result;
}
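For contrast with the 0 passed by servicemanager, an ordinary binder server usually sizes and then spawns its thread pool. A usage sketch (the count 4 is just an example value):
#include <binder/IPCThreadState.h>
#include <binder/ProcessState.h>

void startServerThreadPool() {
    android::ProcessState::self()->setThreadPoolMaxThreadCount(4);  // extra threads the driver may request
    android::ProcessState::self()->startThreadPool();               // spawn the first pool thread
    android::IPCThreadState::self()->joinThreadPool();              // main thread also serves transactions
}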
2.14 The ServiceManager constructor
ServiceManager::ServiceManager(std::unique_ptr<Access>&& access) : mAccess(std::move(access)) {
}
//负责权限控制
Access::Access() {
union selinux_callback cb;//selinux相关
cb.func_audit = auditCallback;
selinux_set_callback(SELINUX_CB_AUDIT, cb);
cb.func_log = kIsVendor ? selinux_vendor_log_callback : selinux_log_callback;
selinux_set_callback(SELINUX_CB_LOG, cb);
CHECK(selinux_status_open(true /*fallback*/) >= 0);
CHECK(getcon(&mThisProcessContext) == 0);
}
2.15 ServiceManager::addService
Main responsibility:
1. Store the ServiceManager object itself in a map. The key is the service name and the value is a Service struct that wraps the corresponding binder object.
//ServiceManager继承自BnServiceManager
"manager", manager, false /*allowIsolated*/, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk()
Status ServiceManager::addService(const std::string& name, const sp<IBinder>& binder, bool allowIsolated, int32_t dumpPriority) {
//传入的参数
//name是manager
//binder是ServiceManager类对象,ServiceManager继承自BnServiceManager
//allowIsolated是false
auto ctx = mAccess->getCallingContext();
/**
// app不能添加到service中
if (multiuser_get_app_id(ctx.uid) >= AID_APP) {
return Status::fromExceptionCode(Status::EX_SECURITY);
}
if (!mAccess->canAdd(ctx, name)) {//检查se权限
return Status::fromExceptionCode(Status::EX_SECURITY);
}
if (binder == nullptr) {
return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
}
if (!isValidServiceName(name)) {
LOG(ERROR) << "Invalid service name: " << name;
return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
}
*/
#ifndef VENDORSERVICEMANAGER
if (!meetsDeclarationRequirements(binder, name)) {
// already logged
return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
}
#endif // !VENDORSERVICEMANAGER
//检查是否有死亡链接
if (binder->remoteBinder() != nullptr &&
binder->linkToDeath(sp<ServiceManager>::fromExisting(this)) != OK) {
LOG(ERROR) << "Could not linkToDeath when adding " << name;
return Status::fromExceptionCode(Status::EX_ILLEGAL_STATE);
}
// 覆盖旧服务(如果存在)
//mNameToService是一个ServiceMap,using ServiceMap = std::map<std::string, Service>;
// struct Service {
// sp<IBinder> binder; // not null
// bool allowIsolated;
// int32_t dumpPriority;
// bool hasClients = false; // notifications sent on true -> false.
// bool guaranteeClient = false; // forces the client check to true
// pid_t debugPid = 0; // the process in which this service runs
// the number of clients of the service, including servicemanager itself
// ssize_t getNodeStrongRefCount();
//};
mNameToService[name] = Service {//保存此service,name是manager
.binder = binder,//此时是ServiceManager,对象。
.allowIsolated = allowIsolated,
.dumpPriority = dumpPriority,
.debugPid = ctx.debugPid,
};
auto it = mNameToRegistrationCallback.find(name);//mNameToRegistrationCallback是一个ServiceCallbackMap
//using ServiceCallbackMap = std::map<std::string, std::vector<sp<IServiceCallback>>>;
/**
if (it != mNameToRegistrationCallback.end()) {
for (const sp<IServiceCallback>& cb : it->second) {
mNameToService[name].guaranteeClient = true;
// permission checked in registerForNotifications
cb->onRegistration(name, binder);
}
}
*/
return Status::ok();
}
(Class structure diagram of ServiceManager omitted.)
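servicemanager can only insert itself into the map directly because it owns the map; every other native service reaches this addService() over binder IPC. A usage sketch (the service name "my.example.service" and the service object are hypothetical):
#include <binder/IBinder.h>
#include <binder/IServiceManager.h>
#include <utils/String16.h>

void registerWithServiceManager(const android::sp<android::IBinder>& service) {
    // Goes through BpServiceManager -> binder driver -> the ServiceManager::addService() shown above.
    android::defaultServiceManager()->addService(android::String16("my.example.service"), service);
}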
2.16 getCallingContext
Access::CallingContext Access::getCallingContext() {
IPCThreadState* ipc = IPCThreadState::self();//创建了IPCThreadState对象
const char* callingSid = ipc->getCallingSid();//获取了sid
pid_t callingPid = ipc->getCallingPid();//获取了pid
return CallingContext {
.debugPid = callingPid,
.uid = ipc->getCallingUid(),
.sid = callingSid ? std::string(callingSid) : getPidcon(callingPid),
};//返回了一个保存sid,uid的结构体
}
2.17 IPCThreadState::self
IPCThreadState* IPCThreadState::self()
{
/**不执行
if (gHaveTLS.load(std::memory_order_acquire)) {//定义时是false。static std::atomic<bool> gHaveTLS(false);
restart:
const pthread_key_t k = gTLS;
IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
if (st) return st;
return new IPCThreadState;
}
// Racey, heuristic test for simultaneous shutdown.
if (gShutdown.load(std::memory_order_relaxed)) {//定义是false。static std::atomic<bool> gShutdown = false;
ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
return nullptr;
}
*/
pthread_mutex_lock(&gTLSMutex);
if (!gHaveTLS.load(std::memory_order_relaxed)) {
int key_create_value = pthread_key_create(&gTLS, threadDestructor);//分配用于表示进程中线程特定数据的键,键对进程中的所有线程来说是全局的.
//pthread_key_create第一个参数为指向一个键值的指针,第二个参数指明了一个destructor函数,
//如果这个参数不为空,那么当每个线程结束时,系统将调用这个函数来释放绑定在这个键上的内存块。
//key_create_value成功返回0,其他是失败
/**
if (key_create_value != 0) {
pthread_mutex_unlock(&gTLSMutex);
ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
strerror(key_create_value));
return nullptr;
}
*/
gHaveTLS.store(true, std::memory_order_release);//将gHaveTLS值设置为true。static std::atomic<bool> gHaveTLS(false);
}
pthread_mutex_unlock(&gTLSMutex);
goto restart;
}
2.18 IPCThreadState::IPCThreadState
IPCThreadState::IPCThreadState()
: mProcess(ProcessState::self()),//保存当前的ProcessState对象
mServingStackPointer(nullptr),
mWorkSource(kUnsetWorkSource),
mPropagateWorkSource(false),
mIsLooper(false),
mIsFlushing(false),
mStrictModePolicy(0),
mLastTransactionBinderFlags(0),
mCallRestriction(mProcess->mCallRestriction) {
pthread_setspecific(gTLS, this);//将自己即IPCThreadState对象存储到gTLS中。
clearCaller();
mIn.setDataCapacity(256);//设置输入缓存区大小
mOut.setDataCapacity(256);//设置输出缓存区大小
}
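Sections 2.17 and 2.18 are really the classic pthread thread-local-storage singleton pattern. A standalone sketch (the type Context and function names are made up, not Android code):
#include <pthread.h>

struct Context { int callingPid = 0; };

static pthread_key_t gKey;
static pthread_once_t gKeyOnce = PTHREAD_ONCE_INIT;

static void destroyContext(void* obj) { delete static_cast<Context*>(obj); }
static void makeKey() { pthread_key_create(&gKey, destroyContext); }

Context* currentThreadContext() {
    pthread_once(&gKeyOnce, makeKey);                    // one key for the whole process
    if (void* existing = pthread_getspecific(gKey)) {
        return static_cast<Context*>(existing);          // this thread already has its object
    }
    Context* fresh = new Context();
    pthread_setspecific(gKey, fresh);                    // visible only to the calling thread
    return fresh;
}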
2.19 Access::canAdd
Main responsibility:
1. Check whether the caller has the SELinux permission to add a service to servicemanager.
bool Access::canAdd(const CallingContext& ctx, const std::string& name) {
return actionAllowedFromLookup(ctx, name, "add");
}
bool Access::actionAllowedFromLookup(const CallingContext& sctx, const std::string& name, const char *perm) {
char *tctx = nullptr;
if (selabel_lookup(getSehandle(), &tctx, name.c_str(), SELABEL_CTX_ANDROID_SERVICE) != 0) {
LOG(ERROR) << "SELinux: No match for " << name << " in service_contexts.\n";
return false;
}
bool allowed = actionAllowed(sctx, tctx, perm, name);
freecon(tctx);
return allowed;
}
2.20 IPCThreadState::setTheContextObject
Main responsibility:
1. Record the context object for the current IPC thread.
void IPCThreadState::setTheContextObject(const sp<BBinder>& obj)
{
the_context_object = obj;//设置当前ipc线程的上下文对象(BBinder)，obj是ServiceManager类对象。
}
2.21 ProcessState::becomeContextManager
Main responsibility:
1. Tell the driver to register this process as the binder context manager.
bool ProcessState::becomeContextManager()
{
AutoMutex _l(mLock);
flat_binder_object obj {
.flags = FLAT_BINDER_FLAG_TXN_SECURITY_CTX,//表示请求安全上下文。FLAT_BINDER_FLAG_TXN_SECURITY_CTX
};
int result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR_EXT, &obj);//通知驱动，将ServiceManager设置成为安全的上下文
// fallback to original method
if (result != 0) {
android_errorWriteLog(0x534e4554, "121035042");
int unused = 0;
//如果安全上下文设置失败,继续使用原有的BINDER_SET_CONTEXT_MGR来设置为管理者
result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR, &unused);
}
if (result == -1) {
ALOGE("Binder ioctl to become context manager failed: %s\n", strerror(errno));
}
return result == 0;
}
2.22 binder_ioctl
Main responsibilities:
1. Retrieve the binder_proc object of the SM process.
2. Set the binder context manager, i.e. servicemanager becomes the binder daemon.
3. Create a binder node for servicemanager inside the driver and assign it to binder_context_mgr_node.
//设置成为binder管理者
//ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR_EXT, &obj);//通知驱动，将ServiceManager设置成为安全的上下文
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;//binder线程
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
trace_binder_ioctl(cmd, arg);
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//条件成立,不休眠,返回值是0
if (ret)
goto err_unlocked;
binder_lock(__func__);
thread = binder_get_thread(proc);//获取binder_thread
if (thread == NULL) {
ret = -ENOMEM;
goto err;
}
switch (cmd) {
case BINDER_SET_CONTEXT_MGR://设置binder的context管理者,也就是servicemanager称为守护进程
if (binder_context_mgr_node != NULL) {//如果不为空,代表已经设置过了
printk(KERN_ERR "binder: BINDER_SET_CONTEXT_MGR already set\n");
ret = -EBUSY;
goto err;
}
ret = security_binder_set_context_mgr(proc->tsk);
if (ret < 0)
goto err;
if (binder_context_mgr_uid != -1) {//binder_context_mgr_uid不为-1,代表已经有进程注册过管理者了
if (binder_context_mgr_uid != current->cred->euid) {//如果和当前进程id不等,则错误
printk(KERN_ERR "binder: BINDER_SET_"
"CONTEXT_MGR bad uid %d != %d\n",
current->cred->euid,
binder_context_mgr_uid);
ret = -EPERM;
goto err;
}
}
else//未被注册过
binder_context_mgr_uid = current->cred->euid;//则设置为当前进程的用户id
binder_context_mgr_node = binder_new_node(proc, NULL, NULL);//创建一个binder对象,并赋值给binder_context_mgr_node
if (binder_context_mgr_node == NULL) {
ret = -ENOMEM;
goto err;
}
binder_context_mgr_node->local_weak_refs++;
binder_context_mgr_node->local_strong_refs++;
binder_context_mgr_node->has_strong_ref = 1;
binder_context_mgr_node->has_weak_ref = 1;
break;
}
ret = 0;
err:
if (thread)
thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
binder_unlock(__func__);
wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret && ret != -ERESTARTSYS)
printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
trace_binder_ioctl_done(ret);
return ret;
}
2.23 binder_new_node
1. Create the driver-side binder entity (binder_node) for sm and insert it into the process's red-black tree; since sm has just started, this node is the first node in the tree.
static struct binder_node *binder_new_node(struct binder_proc *proc,
void __user *ptr,
void __user *cookie)
{
struct rb_node **p = &proc->nodes.rb_node;//该进程(SM进程)的nodes会用红黑树保存一系列的binder实体对象(binder_node)和binder引用对象(binder_ref)。
struct rb_node *parent = NULL;
struct binder_node *node;//binder_node代表是binder实体
while (*p) {//proc->nodes.rb_node,代表如果当前进程的红黑树节点不为空,则将当前节点插入红黑树的指定位置。那么我们知道sm刚启动肯定是空的,
parent = *p;
node = rb_entry(parent, struct binder_node, rb_node);//返回该节点对应的数据类型
if (ptr < node->ptr)
p = &(*p)->rb_left;
else if (ptr > node->ptr)
p = &(*p)->rb_right;
else
return NULL;
}
node = kzalloc(sizeof(*node), GFP_KERNEL);//为binder实体(SM的驱动的binder实体)对象分配内核空间
if (node == NULL)
return NULL;
binder_stats_created(BINDER_STAT_NODE);
rb_link_node(&node->rb_node, parent, p);
rb_insert_color(&node->rb_node, &proc->nodes);//将当前binder对象插入红黑树
node->debug_id = ++binder_last_id;
node->proc = proc;//binder实体的节点中保存此binder实体对应的进程proc对象,即sm的驱动中的binder实体保存sm进程的proc对象。
node->ptr = ptr;//此处是null
node->cookie = cookie;//此处是null
node->work.type = BINDER_WORK_NODE;
INIT_LIST_HEAD(&node->work.entry);//创建该binder实体对象的work头节点
INIT_LIST_HEAD(&node->async_todo);//创建该binder实体对象todo头节点
binder_debug(BINDER_DEBUG_INTERNAL_REFS,
"binder: %d:%d node %d u%p c%p created\n",
proc->pid, current->pid, node->debug_id,
node->ptr, node->cookie);
return node;
}
2.24 Looper::prepare
This is where the Looper mechanism comes into play; to make the picture complete, Looper itself is analyzed here as well.
Main responsibility:
1. Create a Looper object for the current thread.
sp<Looper> Looper::prepare(int opts) {//opt是false
bool allowNonCallbacks = opts & PREPARE_ALLOW_NON_CALLBACKS;//表示是否有回调,此时是false
sp<Looper> looper = Looper::getForThread();
if (looper == nullptr) {
looper = new Looper(allowNonCallbacks);//创建Looper,allowNonCallbacks是false
Looper::setForThread(looper);//将新的looper设置给gTLSKey
}
if (looper->getAllowNonCallbacks() != allowNonCallbacks) {
ALOGW("Looper already prepared for this thread with a different value for the "
"LOOPER_PREPARE_ALLOW_NON_CALLBACKS option.");
}
return looper;//返回一个looper对象
}
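Before diving into Looper's internals, here is how the same Looper API looks from the caller's point of view (a usage sketch with a hypothetical fd and callback, mirroring what BinderCallback::setupTo does in section 2.30):
#include <utils/Looper.h>

// Returning 1 keeps the callback registered; returning 0 unregisters the fd.
static int onFdReadable(int fd, int events, void* /*data*/) {
    // drain/handle fd here ...
    return 1;
}

void runLoopOnCurrentThread(int fd) {
    android::sp<android::Looper> looper = android::Looper::prepare(0 /*opts*/);
    looper->addFd(fd, android::Looper::POLL_CALLBACK, android::Looper::EVENT_INPUT,
                  onFdReadable, nullptr /*data*/);
    while (true) {
        looper->pollAll(-1);   // same endless wait as servicemanager's main()
    }
}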
2.25 Looper::getForThread
sp<Looper> Looper::getForThread() {
int result = pthread_once(& gTLSOnce, initTLSKey);//pthread_once保证initTLSKey函数在此进程只执行一次
//static pthread_once_t gTLSOnce = PTHREAD_ONCE_INIT;
LOG_ALWAYS_FATAL_IF(result != 0, "pthread_once failed");
return (Looper*)pthread_getspecific(gTLSKey);
}
2.26 Looper::initTLSKey
void Looper::initTLSKey() {
int error = pthread_key_create(&gTLSKey, threadDestructor);//分配用于表示进程中线程特定数据的键，键对进程中的所有线程来说是全局的。
//pthread_key_create第一个参数为指向一个键值的指针,第二个参数指明了一个destructor函数,
//如果这个参数不为空,那么当每个线程结束时,系统将调用这个函数来释放绑定在这个键上的内存块。
//key_create_value成功返回0,其他是失败
LOG_ALWAYS_FATAL_IF(error != 0, "Could not allocate TLS key: %s", strerror(error));
}
2.27 Looper::Looper
Looper::Looper(bool allowNonCallbacks)
: mAllowNonCallbacks(allowNonCallbacks),
mSendingMessage(false),
mPolling(false),
mEpollRebuildRequired(false),
mNextRequestSeq(0),
mResponseIndex(0),
mNextMessageUptime(LLONG_MAX) {
mWakeEventFd.reset(eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC));//创建一个初始计数为0的eventfd，作为唤醒事件的fd
LOG_ALWAYS_FATAL_IF(mWakeEventFd.get() < 0, "Could not make wake event fd: %s", strerror(errno));
AutoMutex _l(mLock);
rebuildEpollLocked();
}
2.28 Looper::rebuildEpollLocked
Looper is essentially built on top of the epoll mechanism.
Main responsibilities:
1. Create the epoll instance.
2. Register the wake event fd with epoll_ctl; when that fd has data, epoll_wait returns the corresponding eventItem.
void Looper::rebuildEpollLocked() {
/**
if (mEpollFd >= 0) {//如果有老的epoll,则关闭老的epoll单例
#if DEBUG_CALLBACKS
ALOGD("%p ~ rebuildEpollLocked - rebuilding epoll set", this);
#endif
mEpollFd.reset();
}
*/
mEpollFd.reset(epoll_create1(EPOLL_CLOEXEC));//创建一个epooll对象赋值给mEpollFd,EPOLL_CLOEXEC代表进程被替换时会关闭打开的文件描述符
struct epoll_event eventItem;//epoll_event结构体
memset(& eventItem, 0, sizeof(epoll_event));
eventItem.events = EPOLLIN;//代表是可读事件
eventItem.data.fd = mWakeEventFd.get();//保存mWakeEventFd的fd
int result = epoll_ctl(mEpollFd.get(), EPOLL_CTL_ADD, mWakeEventFd.get(), &eventItem);//将唤醒事件的fd添加到epoll_ctl下监听,如果唤醒事件的fd下有数据,则返回eventItem
LOG_ALWAYS_FATAL_IF(result != 0, "Could not add wake event fd to epoll instance: %s",
strerror(errno));
/**刚开始没有
for (size_t i = 0; i < mRequests.size(); i++) {
const Request& request = mRequests.valueAt(i);
struct epoll_event eventItem;
request.initEventItem(&eventItem);
int epollResult = epoll_ctl(mEpollFd.get(), EPOLL_CTL_ADD, request.fd, &eventItem);
if (epollResult < 0) {
ALOGE("Error adding epoll events for fd %d while rebuilding epoll set: %s",
request.fd, strerror(errno));
}
}
*/
}
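The mWakeEventFd registered above is how other threads interrupt the wait: Looper::wake() writes to the eventfd, which makes the fd readable and lets the epoll_wait in pollInner() return. A minimal sketch of that wake path (not the Looper source itself; wakeSketch is a made-up name):
#include <sys/eventfd.h>
#include <unistd.h>
#include <cstdint>

void wakeSketch(int wakeEventFd) {
    uint64_t increment = 1;
    write(wakeEventFd, &increment, sizeof(increment));   // eventfd counter > 0 => EPOLLIN fires
}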
2.29 Looper::setForThread
1. Store the new Looper in thread-local storage under gTLSKey; the key is global and shared by all threads in the process, while the value stored under it is per-thread.
void Looper::setForThread(const sp<Looper>& looper) {
sp<Looper> old = getForThread(); // also has side-effect of initializing TLS
if (looper != nullptr) {
looper->incStrong((void*)threadDestructor);//
}
pthread_setspecific(gTLSKey, looper.get());//将新的looper设置给gTLSKey,这个gTLSKey对于进程中的所有线程都是全局的。
if (old != nullptr) {
old->decStrong((void*)threadDestructor);
}
}
2.30 sp<BinderCallback> setupTo
Main responsibilities:
1. Create a BinderCallback object.
2. Talk to the binder driver to announce that this thread is about to enter the message loop.
3. Add the /dev/binder fd to epoll so the loop is notified of messages coming from the driver.
static sp<BinderCallback> setupTo(const sp<Looper>& looper) {
sp<BinderCallback> cb = sp<BinderCallback>::make();//创建了BinderCallback对象
int binder_fd = -1;
IPCThreadState::self()->setupPolling(&binder_fd);//告诉binder驱动要进入消息循环了。设置binder_fd为/dev/binder的fd
int ret = looper->addFd(binder_fd,
Looper::POLL_CALLBACK,
Looper::EVENT_INPUT,
cb,
nullptr /*data*/);//将/dev/binder驱动的fd,添加到epoll中,用于监听是否有来自驱动的消息。
return cb;
}
2.31 IPCThreadState::setupPolling
Main responsibilities:
1. Commands for the binder driver are carried in the mOut Parcel; here the buffer is filled with the BC_ENTER_LOOPER command protocol, which tells the driver "this thread is entering the loop".
2. Once mOut is filled, the command is flushed to the driver.
status_t IPCThreadState::setupPolling(int* fd)
{
if (mProcess->mDriverFD < 0) {
return -EBADF;
}
mOut.writeInt32(BC_ENTER_LOOPER);//mOut是一个Parcel类对象,向设备注册派生的looper线程,
//向binder驱动发送命令协议BC_ENTER_LOOPER,告诉binder驱动"本线程要进入循环状态了"
flushCommands();//向驱动发送命令
*fd = mProcess->mDriverFD;//设置fd为/dev/binder的fd
return 0;
}
2.32 Parcel::writeInt32
status_t Parcel::writeInt32(int32_t val)
{
return writeAligned(val);
}
template<class T>
status_t Parcel::writeAligned(T val) {
static_assert(PAD_SIZE_UNSAFE(sizeof(T)) == sizeof(T));
if ((mDataPos+sizeof(val)) <= mDataCapacity) {//如果已经有的mDataPos位置加上数据的大小小于容量
restart_write:
*reinterpret_cast<T*>(mData+mDataPos) = val;//则将数据设置进去
return finishWrite(sizeof(val));
}
status_t err = growData(sizeof(val));
if (err == NO_ERROR) goto restart_write;
return err;
}
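A quick round trip shows what writeInt32()/writeAligned() do with the mData buffer (usage sketch, not servicemanager code; parcelRoundTrip is a made-up name):
#include <binder/Parcel.h>

int32_t parcelRoundTrip() {
    android::Parcel p;
    p.writeInt32(42);        // appended at mDataPos, growing mData via growData() if needed
    p.setDataPosition(0);    // rewind to the start before reading back
    return p.readInt32();    // returns 42
}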
2.33 IPCThreadState::flushCommands
1. The pending command is BC_ENTER_LOOPER, telling the binder driver that this thread is entering the loop; flushCommands() pushes it to the driver via talkWithDriver(false).
void IPCThreadState::flushCommands()
{
if (mProcess->mDriverFD < 0)
return;
talkWithDriver(false);
//刷新可能会导致执行写后refcount递减,进而可能导致BC_RELEASE/BC_DECREFS在mOut中排队。所以,如果需要的话,再次冲洗。
if (mOut.dataSize() > 0) {
talkWithDriver(false);
}
if (mOut.dataSize() > 0) {
ALOGW("mOut.dataSize() > 0 after flushCommands()");
}
}
2.34 IPCThreadState::talkWithDriver
1. Still carrying BC_ENTER_LOOPER: talkWithDriver() packs mOut into a binder_write_read struct and issues the BINDER_WRITE_READ ioctl.
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
if (mProcess->mDriverFD < 0) {
return -EBADF;
}
binder_write_read bwr;
//struct binder_write_read {
//binder_size_t write_size;//要写入的字节数,write_buffer的总字节数
//binder_size_t write_consumed;//驱动程序占用的字节数,write_buffer已消费的字节数
//binder_uintptr_t write_buffer;//写缓冲数据的指针
//binder_size_t read_size;//要读的字节数,read_buffer的总字节数
//binder_size_t read_consumed;//驱动程序占用的字节数,read_buffer已消费的字节数
//binder_uintptr_t read_buffer;//读缓存数据的指针
//};
// Is the read buffer empty?
const bool needRead = mIn.dataPosition() >= mIn.dataSize();//mIn保存驱动发送给serviceManager的消息，needRead表示是否还需要从驱动读取数据
//当我们正在从min中读取数据,或者调用方打算读取下一条数据(doReceive为true时),我们不会写入任何内容。
//此时doReceive是false,且无数据可读,所以我们是会写入数据的
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
/**
if (doReceive && needRead) {//当我们正在读的时候。
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
}*/
else {//当不读的时候,设置读的大小和buffer为0
bwr.read_size = 0;
bwr.read_buffer = 0;
}
/**log不看
IF_LOG_COMMANDS() {
TextOutput::Bundle _b(alog);
if (outAvail != 0) {
alog << "Sending commands to driver: " << indent;
const void* cmds = (const void*)bwr.write_buffer;
const void* end = ((const uint8_t*)cmds)+bwr.write_size;
alog << HexDump(cmds, bwr.write_size) << endl;
while (cmds < end) cmds = printCommand(alog, cmds);
alog << dedent;
}
alog << "Size of receive buffer: " << bwr.read_size
<< ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
}
*/
// 如果读缓冲区和写缓冲区都为0,代表无事可做,立即返回,此时write_size中有数据。
/**
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
*/
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
do {
#if defined(__ANDROID__)
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)//向binder驱动不断的读写
err = NO_ERROR;
else
err = -errno;
#else
err = INVALID_OPERATION;
#endif
/**
if (mProcess->mDriverFD < 0) {
err = -EBADF;
}
*/
} while (err == -EINTR);
if (err >= NO_ERROR) {//代表读取成功
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())//如果驱动消费的数据大小小于mout的大小,则说明驱动没有消费mout数据
LOG_ALWAYS_FATAL("Driver did not consume write buffer. "
"err: %s consumed: %zu of %zu",
statusToString(err).c_str(),
(size_t)bwr.write_consumed,
mOut.dataSize());
else {//代表mout被正确消费
mOut.setDataSize(0);//重置数据大小为0
processPostWriteDerefs();//主要是将写的引用计数减少1,释放
}
}
if (bwr.read_consumed > 0) {//如果驱动读的数据大小大于0
mIn.setDataSize(bwr.read_consumed);//设置mIn的大小
mIn.setDataPosition(0);//设置min数据起始位置
}
return NO_ERROR;
}
return err;
}
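Put together, what sections 2.31–2.34 send to the kernel can be reproduced with one raw ioctl. A sketch that is equivalent in effect (not the framework code; enterLooperSketch is a made-up name):
#include <sys/ioctl.h>
#include <linux/android/binder.h>

int enterLooperSketch(int binderFd) {
    uint32_t cmd = BC_ENTER_LOOPER;              // the single command word placed in mOut
    struct binder_write_read bwr {};
    bwr.write_buffer = (binder_uintptr_t)&cmd;
    bwr.write_size = sizeof(cmd);
    bwr.read_size = 0;                           // flushCommands() does not ask to read anything back
    return ioctl(binderFd, BINDER_WRITE_READ, &bwr);
}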
2.35 binder_ioctl
1. Driver side: the process traps into kernel mode.
2. The driver copies the binder_write_read struct from user space into kernel space and then parses it.
3. It handles BC_ENTER_LOOPER, i.e. the user thread announces that it is entering the loop.
//ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr),此时mout数据是BC_ENTER_LOOPER分析
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;//取出此fd(/dev/binder驱动)的proc对象,一个驱动节点对应一个proc对象
struct binder_thread *thread;//binder线程
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;//是bwr
trace_binder_ioctl(cmd, arg);
ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);//不休眠
if (ret)
goto err_unlocked;
binder_lock(__func__);
thread = binder_get_thread(proc);//获取当前线程binder_thread
if (thread == NULL) {
ret = -ENOMEM;
goto err;
}
switch (cmd) {
case BINDER_WRITE_READ: {
struct binder_write_read bwr;
if (size != sizeof(struct binder_write_read)) {//查看大小是否正确
ret = -EINVAL;
goto err;
}
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {//从用户空间复制数据到内核空间
//第一个参数to是内核空间的数据目标地址指针,
//第二个参数from是用户空间的数据源地址指针,
//第三个参数n是数据的长度。
ret = -EFAULT;
goto err;
}
if (bwr.write_size > 0) {//当写缓存有数据的时候,执行写操作
ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
//参数分析:
//proc代表sm对象的proc
//thread为此sm进程的binder线程
//bwr.write_buffer,内核数据的起始地址
//write_size,数据大小
//write_consumed,驱动程序已消费的数据大小
trace_binder_write_done(ret);
if (ret < 0) {//如果写失败,再将bwr的值写回给ubuf
bwr.read_consumed = 0;
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
//第一个参数是用户空间的指针,
//第二个参数是内核空间指针,
//n表示从内核空间向用户空间拷贝数据的字节数
ret = -EFAULT;
goto err;
}
}
/**
if (bwr.read_size > 0) {//当读缓存有数据的时候,执行读操作,此时读缓存无数据
ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer, bwr.read_size, &bwr.read_consumed, filp->f_flags & O_NONBLOCK);
trace_binder_read_done(ret);
if (!list_empty(&proc->todo))
wake_up_interruptible(&proc->wait);
if (ret < 0) {
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto err;
}
}
*/
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {//将此内核空间数据,拷贝到ubuf中,此时是写的消费的大小write_consumed变成了4字节。
//第一个参数是用户空间的指针,
//第二个参数是内核空间指针,
//n表示从内核空间向用户空间拷贝数据的字节数
ret = -EFAULT;
goto err;
}
break;
}
ret = 0;
err:
if (thread)
thread->looper &= ~BINDER_LOOPER_STATE_NEED_RETURN;
binder_unlock(__func__);
wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
if (ret && ret != -ERESTARTSYS)
printk(KERN_INFO "binder: %d:%d ioctl %x %lx returned %d\n", proc->pid, current->pid, cmd, arg, ret);
err_unlocked:
trace_binder_ioctl_done(ret);
return ret;
}
2.36 binder_thread_write
1. The command read out is BC_ENTER_LOOPER, so the driver marks the current thread with BINDER_LOOPER_STATE_ENTERED.
int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
void __user *buffer, int size, signed long *consumed)
//参数分析:
//proc代表sm对象的proc
//thread为此sm进程的binder线程
//bwr.write_buffer,内核数据的起始地址,数据是BC_ENTER_LOOPER
//write_size,4字节,数据大小
//consumed=0,驱动程序已消费的数据大小
{
uint32_t cmd;
void __user *ptr = buffer + *consumed;//首地址+0,即是写buffer首地址。
void __user *end = buffer + size;//buffer的尾地址。
while (ptr < end && thread->return_error == BR_OK) {
if (get_user(cmd, (uint32_t __user *)ptr))//从写buffer中获取命令给cmd,即此时是BC_ENTER_LOOPER
return -EFAULT;
ptr += sizeof(uint32_t);//让buffer的地址跳过BC_ENTER_LOOPER,因为buffer中可能还有其他数据。此时是没数据了
trace_binder_command(cmd);
if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {//记录信息
binder_stats.bc[_IOC_NR(cmd)]++;
proc->stats.bc[_IOC_NR(cmd)]++;
thread->stats.bc[_IOC_NR(cmd)]++;
}
switch (cmd) {
case BC_ENTER_LOOPER:
/**
if (thread->looper & BINDER_LOOPER_STATE_REGISTERED) {//如果此looper已经注册过,则错误
thread->looper |= BINDER_LOOPER_STATE_INVALID;
binder_user_error("binder: %d:%d ERROR:"
" BC_ENTER_LOOPER called after "
"BC_REGISTER_LOOPER\n",
proc->pid, thread->pid);
}
*/
thread->looper |= BINDER_LOOPER_STATE_ENTERED;//设置为此binder线程已经注册过了。
break;
}
*consumed = ptr - buffer;//已消费的字节大小,此时为4字节
}
return 0;
}
2.37 Looper::addFd
1. Add the /dev/binder fd to epoll and watch it for readable events.
int Looper::addFd(int fd, int ident, int events, Looper_callbackFunc callback, void* data) {
//此时传入的参数是
//fd是/dev/binder的fd
//ident是POLL_CALLBACK //代表Looper_pollOnce()和Looper_polAll()的会执行了一个或多个回调。
//events是EVENT_INPUT ///dev/binder文件描述符中存在可读事件,然后会执行回调函数。
//callback是BinderCallback
//data是nullptr
return addFd(fd, ident, events, callback ? new SimpleLooperCallback(callback) : nullptr, data);
}
SimpleLooperCallback::SimpleLooperCallback(Looper_callbackFunc callback) :
mCallback(callback) {
}
int Looper::addFd(int fd, int ident, int events, const sp<LooperCallback>& callback, void* data) {
/**
if (!callback.get()) {
if (! mAllowNonCallbacks) {
ALOGE("Invalid attempt to set NULL callback but not allowed for this looper.");
return -1;
}
if (ident < 0) {
ALOGE("Invalid attempt to set NULL callback with ident < 0.");
return -1;
}
}
*/
else
{
ident = POLL_CALLBACK;
}
{ // acquire lock
AutoMutex _l(mLock);
Request request;
request.fd = fd;//此时fd是/dev/binder的fd
request.ident = ident;//ident是POLL_CALLBACK,代表Looper_pollOnce()和Looper_polAll()的会执行了一个或多个回调
request.events = events;//events是EVENT_INPUT 代表dev/binder文件描述符中存在可读事件,然后会执行回调函数。
request.seq = mNextRequestSeq++;//seq值是0,mNextRequestSeq值变成1
request.callback = callback;//BinderCallback
request.data = data;//data是空
/**if (mNextRequestSeq == -1) mNextRequestSeq = 0;*/ // reserve sequence number -1
struct epoll_event eventItem;
request.initEventItem(&eventItem);//初始化epoll_event,用于监听此fd是否有可读事件,如果有,则执行回调函数
ssize_t requestIndex = mRequests.indexOfKey(fd);//此时此/dev/binder的fd应该不存在
if (requestIndex < 0) {
int epollResult = epoll_ctl(mEpollFd.get(), EPOLL_CTL_ADD, fd, &eventItem);
//将/dev/binder的fd,加入epoll,监听此设备节点下是否有可读事件
/**
if (epollResult < 0) {
ALOGE("Error adding epoll events for fd %d: %s", fd, strerror(errno));
return -1;
}
*/
mRequests.add(fd, request);//添加到容器中。
}
/**
else {
int epollResult = epoll_ctl(mEpollFd.get(), EPOLL_CTL_MOD, fd, &eventItem);
if (epollResult < 0) {
if (errno == ENOENT) {
// Tolerate ENOENT because it means that an older file descriptor was
// closed before its callback was unregistered and meanwhile a new
// file descriptor with the same number has been created and is now
// being registered for the first time. This error may occur naturally
// when a callback has the side-effect of closing the file descriptor
// before returning and unregistering itself. Callback sequence number
// checks further ensure that the race is benign.
//
// Unfortunately due to kernel limitations we need to rebuild the epoll
// set from scratch because it may contain an old file handle that we are
// now unable to remove since its file descriptor is no longer valid.
// No such problem would have occurred if we were using the poll system
// call instead, but that approach carries others disadvantages.
#if DEBUG_CALLBACKS
ALOGD("%p ~ addFd - EPOLL_CTL_MOD failed due to file descriptor "
"being recycled, falling back on EPOLL_CTL_ADD: %s",
this, strerror(errno));
#endif
epollResult = epoll_ctl(mEpollFd.get(), EPOLL_CTL_ADD, fd, &eventItem);
if (epollResult < 0) {
ALOGE("Error modifying or adding epoll events for fd %d: %s",
fd, strerror(errno));
return -1;
}
scheduleEpollRebuildLocked();
} else {
ALOGE("Error modifying epoll events for fd %d: %s", fd, strerror(errno));
return -1;
}
}
mRequests.replaceValueAt(requestIndex, request);
}
*/
} // release lock
return 1;
}
1. Initialize the epoll_event (eventItem) structure; when the /dev/binder fd has readable data, epoll reports it through this eventItem.
void Looper::Request::initEventItem(struct epoll_event* eventItem) const {
int epollEvents = 0;
if (events & EVENT_INPUT) epollEvents |= EPOLLIN;//如果是监听可读事件,则设置为可读
if (events & EVENT_OUTPUT) epollEvents |= EPOLLOUT;//如果是监听可写事件,则设置为可写
memset(eventItem, 0, sizeof(epoll_event)); // zero out unused members of data field union
eventItem->events = epollEvents;
eventItem->data.fd = fd;//是dev/binder驱动的fd。代表监听此fd是否有可读事件,如果有,则执行回调函数
}
2.38 ClientCallbackCallback::setupTo
1. Sets up a timer fd so that periodic work is triggered at a fixed interval (every 5 seconds, per the itimerspec below).
static sp<ClientCallbackCallback> setupTo(const sp<Looper>& looper, const sp<ServiceManager>& manager) {
sp<ClientCallbackCallback> cb = sp<ClientCallbackCallback>::make(manager);
int fdTimer = timerfd_create(CLOCK_MONOTONIC, 0 /*flags*/);//timerfd_create() 创建一个新的计时器对象,并返回引用该计时器的文件描述符
//CLOCK_MONOTONIC代表。一个不可设置的单调递增时钟,它从过去的某个未指定点测量时间,系统启动后不会改变,且CLOCK_MONOTONIC 时钟不测量系统挂起时的时间。
LOG_ALWAYS_FATAL_IF(fdTimer < 0, "Failed to timerfd_create: fd: %d err: %d", fdTimer, errno);
itimerspec timespec {//此处表示计时器启动后,将会在5S后报告一次,然后每隔5S报告一次;
.it_interval = {
.tv_sec = 5,
.tv_nsec = 0,
},
.it_value = {
.tv_sec = 5,
.tv_nsec = 0,
},
};
int timeRes = timerfd_settime(fdTimer, 0 /*flags*/, &timespec, nullptr);
//此函数用于设置新的超时时间,并开始计时。
//参数flags为1代表设置的是绝对时间;为0代表相对时间。
//timespec需要设置的时间
LOG_ALWAYS_FATAL_IF(timeRes < 0, "Failed to timerfd_settime: res: %d err: %d", timeRes, errno);
int addRes = looper->addFd(fdTimer,
Looper::POLL_CALLBACK,
Looper::EVENT_INPUT,
cb,
nullptr);//将计时器添加到epoll中,并传递cb,此时是每隔5秒执行一次
LOG_ALWAYS_FATAL_IF(addRes != 1, "Failed to add client callback FD to Looper");
return cb;
}
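When this timer fd becomes readable every 5 seconds, the Looper callback consumes the expiration count by reading an 8-byte counter from the fd. A minimal sketch of that read (standard timerfd semantics, not the ClientCallbackCallback source; drainTimerFd is a made-up name):
#include <unistd.h>
#include <cstdint>

uint64_t drainTimerFd(int fdTimer) {
    uint64_t expirations = 0;
    read(fdTimer, &expirations, sizeof(expirations));   // number of 5-second ticks since the last read
    return expirations;
}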
2.39 pollAll
1. Block the current thread, waiting for one of the watched fds (the binder driver) to deliver a message.
inline int pollAll(int timeoutMillis) {//timeoutMillis值是-1
return pollAll(timeoutMillis, nullptr, nullptr, nullptr);
}
int Looper::pollAll(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
if (timeoutMillis <= 0) {
int result;
do {
result = pollOnce(timeoutMillis, outFd, outEvents, outData);
} while (result == POLL_CALLBACK);
return result;
}
/**此时不执行
else {
nsecs_t endTime = systemTime(SYSTEM_TIME_MONOTONIC)
+ milliseconds_to_nanoseconds(timeoutMillis);
for (;;) {
int result = pollOnce(timeoutMillis, outFd, outEvents, outData);
if (result != POLL_CALLBACK) {
return result;
}
nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
timeoutMillis = toMillisecondTimeoutDelay(now, endTime);
if (timeoutMillis == 0) {
return POLL_TIMEOUT;
}
}
}
*/
}
2.40 pollOnce
1. At startup there is nothing to process, so the thread blocks here until a message arrives.
int Looper::pollOnce(int timeoutMillis, int* outFd, int* outEvents, void** outData) {
int result = 0;
for (;;) {
/**
while (mResponseIndex < mResponses.size()) {//此时mResponseIndex=0,mResponses.size也等于0
const Response& response = mResponses.itemAt(mResponseIndex++);
int ident = response.request.ident;
if (ident >= 0) {
int fd = response.request.fd;
int events = response.events;
void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - returning signalled identifier %d: "
"fd=%d, events=0x%x, data=%p",
this, ident, fd, events, data);
#endif
if (outFd != nullptr) *outFd = fd;
if (outEvents != nullptr) *outEvents = events;
if (outData != nullptr) *outData = data;
return ident;
}
}
*/
/**
if (result != 0) {
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - returning result %d", this, result);
#endif
if (outFd != nullptr) *outFd = 0;
if (outEvents != nullptr) *outEvents = 0;
if (outData != nullptr) *outData = nullptr;
return result;
}
*/
result = pollInner(timeoutMillis);//阻塞在此处
}
}
2.41 pollInner
1. Blocks in epoll_wait until one of the watched fds has data; for servicemanager this mainly means a readable event from the binder driver.
int Looper::pollInner(int timeoutMillis) {
/**
if (timeoutMillis != 0 && mNextMessageUptime != LLONG_MAX) {
nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
int messageTimeoutMillis = toMillisecondTimeoutDelay(now, mNextMessageUptime);
if (messageTimeoutMillis >= 0
&& (timeoutMillis < 0 || messageTimeoutMillis < timeoutMillis)) {
timeoutMillis = messageTimeoutMillis;
}
}
*/
// Poll.
int result = POLL_WAKE;
mResponses.clear();
mResponseIndex = 0;
// We are about to idle.
mPolling = true;
struct epoll_event eventItems[EPOLL_MAX_EVENTS];
int eventCount = epoll_wait(mEpollFd.get(), eventItems, EPOLL_MAX_EVENTS, timeoutMillis);//阻塞在此处
//等待监听的fd中有消息。主要是监听binder驱动是否有可读消息。
/**
// No longer idling.
mPolling = false;
// Acquire lock.
mLock.lock();
// Rebuild epoll set if needed.
if (mEpollRebuildRequired) {
mEpollRebuildRequired = false;
rebuildEpollLocked();
goto Done;
}
// Check for poll error.
if (eventCount < 0) {
if (errno == EINTR) {
goto Done;
}
ALOGW("Poll failed with an unexpected error: %s", strerror(errno));
result = POLL_ERROR;
goto Done;
}
// Check for poll timeout.
if (eventCount == 0) {
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - timeout", this);
#endif
result = POLL_TIMEOUT;
goto Done;
}
// Handle all events.
#if DEBUG_POLL_AND_WAKE
ALOGD("%p ~ pollOnce - handling events from %d fds", this, eventCount);
#endif
for (int i = 0; i < eventCount; i++) {
int fd = eventItems[i].data.fd;
uint32_t epollEvents = eventItems[i].events;
if (fd == mWakeEventFd.get()) {
if (epollEvents & EPOLLIN) {
awoken();
} else {
ALOGW("Ignoring unexpected epoll events 0x%x on wake event fd.", epollEvents);
}
} else {
ssize_t requestIndex = mRequests.indexOfKey(fd);
if (requestIndex >= 0) {
int events = 0;
if (epollEvents & EPOLLIN) events |= EVENT_INPUT;
if (epollEvents & EPOLLOUT) events |= EVENT_OUTPUT;
if (epollEvents & EPOLLERR) events |= EVENT_ERROR;
if (epollEvents & EPOLLHUP) events |= EVENT_HANGUP;
pushResponse(events, mRequests.valueAt(requestIndex));
} else {
ALOGW("Ignoring unexpected epoll events 0x%x on fd %d that is "
"no longer registered.", epollEvents, fd);
}
}
}
Done: ;
// Invoke pending message callbacks.
mNextMessageUptime = LLONG_MAX;
while (mMessageEnvelopes.size() != 0) {
nsecs_t now = systemTime(SYSTEM_TIME_MONOTONIC);
const MessageEnvelope& messageEnvelope = mMessageEnvelopes.itemAt(0);
if (messageEnvelope.uptime <= now) {
// Remove the envelope from the list.
// We keep a strong reference to the handler until the call to handleMessage
// finishes. Then we drop it so that the handler can be deleted *before*
// we reacquire our lock.
{ // obtain handler
sp<MessageHandler> handler = messageEnvelope.handler;
Message message = messageEnvelope.message;
mMessageEnvelopes.removeAt(0);
mSendingMessage = true;
mLock.unlock();
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
ALOGD("%p ~ pollOnce - sending message: handler=%p, what=%d",
this, handler.get(), message.what);
#endif
handler->handleMessage(message);
} // release handler
mLock.lock();
mSendingMessage = false;
result = POLL_CALLBACK;
} else {
// The last message left at the head of the queue determines the next wakeup time.
mNextMessageUptime = messageEnvelope.uptime;
break;
}
}
// Release lock.
mLock.unlock();
// Invoke all response callbacks.
for (size_t i = 0; i < mResponses.size(); i++) {
Response& response = mResponses.editItemAt(i);
if (response.request.ident == POLL_CALLBACK) {
int fd = response.request.fd;
int events = response.events;
void* data = response.request.data;
#if DEBUG_POLL_AND_WAKE || DEBUG_CALLBACKS
ALOGD("%p ~ pollOnce - invoking fd event callback %p: fd=%d, events=0x%x, data=%p",
this, response.request.callback.get(), fd, events, data);
#endif
// Invoke the callback. Note that the file descriptor may be closed by
// the callback (and potentially even reused) before the function returns so
// we need to be a little careful when removing the file descriptor afterwards.
int callbackResult = response.request.callback->handleEvent(fd, events, data);
if (callbackResult == 0) {
removeFd(fd, response.request.seq);
}
// Clear the callback reference in the response structure promptly because we
// will not clear the response vector itself until the next poll.
response.request.callback.clear();
result = POLL_CALLBACK;
}
}
return result;
*/
}
This concludes the walkthrough of the servicemanager startup flow.