How to Handle the Android Interviewer -> How IPC Registers and Retrieves Services


Preface


Hi everyone, I'm Lao A.

This chapter continues from the previous one, mainly covering how Java-layer services are registered and retrieved, plus the thread pool. We'll use AMS as the example to see how the Java layer obtains a service.

SystemServer


SystemServer is also started through a main function. Let's step into it:

startService

/**
 * The main entry point from zygote.
 */
public static void main(String[] args) {
    new SystemServer().run();
}

It's very simple: it just calls run(). Let's step into run():

private void run() {
        if (System.currentTimeMillis() < EARLIEST_SUPPORTED_TIME) {
            Slog.w(TAG, "System clock is before 1970; setting to 1970.");
            SystemClock.setCurrentTimeMillis(EARLIEST_SUPPORTED_TIME);
        }
        
        if (!SystemProperties.get("persist.sys.language").isEmpty()) {
            final String languageTag = Locale.getDefault().toLanguageTag();

            SystemProperties.set("persist.sys.locale", languageTag);
            SystemProperties.set("persist.sys.language", "");
            SystemProperties.set("persist.sys.country", "");
            SystemProperties.set("persist.sys.localevar", "");
        }

        Slog.i(TAG, "Entered the Android system server!");
        EventLog.writeEvent(EventLogTags.BOOT_PROGRESS_SYSTEM_RUN, SystemClock.uptimeMillis());

        SystemProperties.set("persist.sys.dalvik.vm.lib.2", VMRuntime.getRuntime().vmLibrary());

        if (SamplingProfilerIntegration.isEnabled()) {
            SamplingProfilerIntegration.start();
            mProfilerSnapshotTimer = new Timer();
            mProfilerSnapshotTimer.schedule(new TimerTask() {
                @Override
                public void run() {
                    SamplingProfilerIntegration.writeSnapshot("system_server", null);
                }
            }, SNAPSHOT_INTERVAL, SNAPSHOT_INTERVAL);
        }

        // Mmmmmm... more memory!
        VMRuntime.getRuntime().clearGrowthLimit();

        VMRuntime.getRuntime().setTargetHeapUtilization(0.8f);

        Build.ensureFingerprintProperty();

        Environment.setUserRequired(true);

        BinderInternal.disableBackgroundScheduling(true);

        android.os.Process.setThreadPriority(
                android.os.Process.THREAD_PRIORITY_FOREGROUND);
        android.os.Process.setCanSelfBackground(false);
        Looper.prepareMainLooper();

        System.loadLibrary("android_servers");

        performPendingShutdown();

        createSystemContext();

        mSystemServiceManager = new SystemServiceManager(mSystemContext);
        LocalServices.addService(SystemServiceManager.class, mSystemServiceManager);

        try {
            startBootstrapServices();
            startCoreServices();
            startOtherServices();
        } catch (Throwable ex) {
            throw ex;
        }

        if (StrictMode.conditionallyEnableDebugLogging()) {
            Slog.i(TAG, "Enabled StrictMode for system server main thread.");
        }

        Looper.loop();
        throw new RuntimeException("Main thread loop unexpectedly exited");
    }

run() first creates the SystemServiceManager (mSystemServiceManager = new SystemServiceManager(mSystemContext);) and then calls startBootstrapServices to bring up a series of core services. Let's step into startBootstrapServices:

private void startBootstrapServices() {
    Installer installer = mSystemServiceManager.startService(Installer.class);

    // Activity manager runs the show.
    // Create the AMS
    mActivityManagerService = mSystemServiceManager.startService(
            ActivityManagerService.Lifecycle.class).getService();
    mActivityManagerService.setSystemServiceManager(mSystemServiceManager);
    mActivityManagerService.setInstaller(installer);
    
    // ...
}

The first step is creating the AMS. Let's step into startService — this is the startService method of SystemServiceManager:

public <T extends SystemService> T startService(Class<T> serviceClass) {
    try {
        final String name = serviceClass.getName();
        Slog.i(TAG, "Starting " + name);
        Trace.traceBegin(Trace.TRACE_TAG_SYSTEM_SERVER, "StartService " + name);

        // Create the service.
        if (!SystemService.class.isAssignableFrom(serviceClass)) {
            throw new RuntimeException("Failed to create " + name
                    + ": service must extend " + SystemService.class.getName());
        }
        final T service;
        try {
            // Reflectively instantiate serviceClass; here this creates the Lifecycle object
            Constructor<T> constructor = serviceClass.getConstructor(Context.class);
            service = constructor.newInstance(mContext);
        } catch (Exception ex) {
            // (AOSP catches the individual reflection exceptions; collapsed here)
            throw new RuntimeException("Failed to create service " + name, ex);
        }

        // Add the Lifecycle object to the ArrayList<SystemService> collection
        mServices.add(service);
        // Call Lifecycle's onStart method
        service.onStart();
        return service;
    } finally {
        Trace.traceEnd(Trace.TRACE_TAG_SYSTEM_SERVER);
    }
}

startService reflectively instantiates serviceClass — here that's the Lifecycle object — adds it to the ArrayList, and then calls Lifecycle's onStart method;

public static final class Lifecycle extends SystemService {
        private final ActivityManagerService mService;

        public Lifecycle(Context context) {
            super(context);
            mService = new ActivityManagerService(context);
        }

        @Override
        public void onStart() {
            mService.start();
        }

        public ActivityManagerService getService() {
            return mService;
        }
}

You can see that Lifecycle.onStart() calls ActivityManagerService's start() method, which brings up the ActivityManagerService;
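
As a side note, the reflective construction used by startService — look up the single-argument constructor, instantiate, then call the start hook — is easy to reproduce in isolation. Below is a minimal, self-contained Java sketch of that pattern; DemoService and HelloService are hypothetical stand-ins, not AOSP classes:

import java.lang.reflect.Constructor;

public class ReflectiveStarter {

    // Hypothetical stand-in for android.os.SystemService.
    public static abstract class DemoService {
        public DemoService(Object context) {}
        public abstract void onStart();
    }

    public static class HelloService extends DemoService {
        public HelloService(Object context) { super(context); }
        @Override public void onStart() { System.out.println("HelloService started"); }
    }

    // Mirrors SystemServiceManager.startService: reflectively invoke the
    // single-argument constructor, then call the onStart hook.
    static <T extends DemoService> T startService(Class<T> serviceClass, Object context)
            throws Exception {
        Constructor<T> ctor = serviceClass.getConstructor(Object.class);
        T service = ctor.newInstance(context);
        service.onStart();
        return service;
    }

    public static void main(String[] args) throws Exception {
        startService(HelloService.class, new Object());
    }
}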

setSystemProcess

private void startBootstrapServices() {
    // ...
    
    // Set up the Application instance for the system process and get started.
    mActivityManagerService.setSystemProcess();
}

After the AMS is created and onStart has run, let's look at how the AMS gets added to the ServiceManager. Step into setSystemProcess:

public void setSystemProcess() {
        try {
            // Add the AMS to the ServiceManager
            ServiceManager.addService(Context.ACTIVITY_SERVICE, this, true);
        } catch (PackageManager.NameNotFoundException e) {
            throw new RuntimeException(
                    "Unable to find android system package", e);
        }
}

As you can see, the very first line adds the AMS to the ServiceManager. Let's look at how addService actually does the adding (shown here is the two-argument overload; the three-argument one called above forwards the same way, passing allowIsolated through):

public static void addService(String name, IBinder service) {
        try {
            getIServiceManager().addService(name, service, false);
        } catch (RemoteException e) {
            Log.e(TAG, "error in addService", e);
        }
}

It calls getIServiceManager().addService, which breaks down into two steps: getIServiceManager, then addService.
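
As an aside: ServiceManager is a hidden API, but the lookup half of this pair — fetching a registered IBinder by name — is sometimes reached from app code via reflection. A hedged sketch (hidden-API restrictions may block this on recent Android releases):

import android.os.IBinder;

public class ServiceManagerLookup {
    // Fetch a registered system service IBinder by name (e.g. "activity").
    // Hidden-API restrictions may block this on modern Android versions.
    public static IBinder getService(String name) throws Exception {
        Class<?> sm = Class.forName("android.os.ServiceManager");
        return (IBinder) sm.getMethod("getService", String.class).invoke(null, name);
    }
}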

getIServiceManager()


Let's first look at the implementation of getIServiceManager():

private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
    return sServiceManager;
}

This is a simple lazy singleton: if sServiceManager already exists it is returned, otherwise it is created. Again there are two pieces to analyze: ServiceManagerNative.asInterface and BinderInternal.getContextObject().

BinderInternal.getContextObject()

Let's see what this argument expression does:

public static final native IBinder getContextObject();

This goes straight to the native layer, ultimately landing in the getContextObject() implementation in android_util_Binder.cpp. Let's step in:

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}

We covered ProcessState::self() in the previous chapter — getContextObject here produces a BpBinder. Next let's see what javaObjectForIBinder does:

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;

    if (val->checkSubclass(&gBinderOffsets)) {
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

    AutoMutex _l(mProxyLock);
    
    // Look up an existing BinderProxy object
    jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
    if (object != NULL) {
        jobject res = jniGetReferent(env, object);
        if (res != NULL) {
            ALOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
            return res;
        }
        LOGDEATH("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
        android_atomic_dec(&gNumProxyRefs);
        val->detachObject(&gBinderProxyOffsets);
        env->DeleteGlobalRef(object);
    }
    // If none is found, create a BinderProxy object
    object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
    if (object != NULL) {
        // Bind the BinderProxy to the BpBinder
        env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
        val->incStrong((void*)javaObjectForIBinder);

        jobject refObject = env->NewGlobalRef(
                env->GetObjectField(object, gBinderProxyOffsets.mSelf));
        // Store the BinderProxy reference inside the BpBinder
        val->attachObject(&gBinderProxyOffsets, refObject,
                jnienv_to_javavm(env), proxy_cleanup);
                
        sp<DeathRecipientList> drl = new DeathRecipientList;
        drl->incStrong((void*)javaObjectForIBinder);
        env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));

        android_atomic_inc(&gNumProxyRefs);
        incRefsCreated(env);
    }
    // Return the BinderProxy
    return object;
}

This method first looks for an existing BinderProxy; if none is found it creates one, then binds the BinderProxy to the BpBinder;

So BinderInternal.getContextObject() ultimately returns a BinderProxy object;

ServiceManagerNative.asInterface

Next let's see what happens in asInterface:

static public IServiceManager asInterface(IBinder obj) {
    if(obj == null) {
        return null;
    }
    IServiceManager in = (IServiceManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }
    return new ServiceManagerProxy(obj);
}

This calls BinderProxy's queryLocalInterface method. Let's step into it:

public IInterface queryLocalInterface(String descriptor) {
    return null;
}

BinderProxy.queryLocalInterface simply returns null (the interface does not live in this process), so asInterface returns a new ServiceManagerProxy object;

So ServiceManagerNative.asInterface ultimately returns a ServiceManagerProxy object;
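
This local-vs-proxy decision is the same pattern every AIDL-generated Stub follows. Here is a hand-written sketch of the idiom (IDemo, DemoStub, and DemoProxy are hypothetical names; real AIDL generates the equivalent):

import android.os.Binder;
import android.os.IBinder;
import android.os.IInterface;
import android.os.Parcel;
import android.os.RemoteException;

interface IDemo extends IInterface {
    String DESCRIPTOR = "com.example.IDemo";
    int TRANSACTION_PING = IBinder.FIRST_CALL_TRANSACTION;
    void ping() throws RemoteException;
}

abstract class DemoStub extends Binder implements IDemo {
    DemoStub() {
        // Register the local interface so same-process callers get this object back.
        attachInterface(this, IDemo.DESCRIPTOR);
    }

    public static IDemo asInterface(IBinder obj) {
        if (obj == null) return null;
        // Same process: queryLocalInterface returns the Binder itself.
        IInterface local = obj.queryLocalInterface(IDemo.DESCRIPTOR);
        if (local instanceof IDemo) return (IDemo) local;
        // Different process: obj is a BinderProxy, so wrap it in a proxy.
        return new DemoProxy(obj);
    }

    @Override public IBinder asBinder() { return this; }
}

class DemoProxy implements IDemo {
    private final IBinder mRemote;
    DemoProxy(IBinder remote) { mRemote = remote; }

    @Override public void ping() throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken(IDemo.DESCRIPTOR);
            mRemote.transact(IDemo.TRANSACTION_PING, data, reply, 0);
            reply.readException();
        } finally {
            reply.recycle();
            data.recycle();
        }
    }

    @Override public IBinder asBinder() { return mRemote; }
}

DemoStub.asInterface(binder) thus returns the local object in-process and a DemoProxy across processes — exactly the decision ServiceManagerNative.asInterface makes above.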

addService


Now let's look at the implementation of addService. ServiceManagerProxy is an inner class of ServiceManagerNative;

public void addService(String name, IBinder service, boolean allowIsolated)
            throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        data.writeInterfaceToken(IServiceManager.descriptor);
        data.writeString(name);
        data.writeStrongBinder(service);
        data.writeInt(allowIsolated ? 1 : 0);
        mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
        reply.recycle();
        data.recycle();
}

The two calls we care about here are writeStrongBinder and transact;

writeStrongBinder

Let's look at its implementation:

public final void writeStrongBinder(IBinder val) {
    nativeWriteStrongBinder(mNativePtr, val);
}

which leads to this native method declared in Parcel.java:

private static native void nativeWriteStrongBinder(long nativePtr, IBinder val);

This corresponds to the following in android_os_Parcel.cpp:

{"nativeWriteStrongBinder",   "(JLandroid/os/IBinder;)V", (void*)android_os_Parcel_writeStrongBinder},
static void android_os_Parcel_writeStrongBinder(JNIEnv* env, jclass clazz, jlong nativePtr, jobject object)
{
    Parcel* parcel = reinterpret_cast<Parcel*>(nativePtr);
    if (parcel != NULL) {
        const status_t err = parcel->writeStrongBinder(ibinderForJavaObject(env, object));
        if (err != NO_ERROR) {
            signalExceptionForError(env, clazz, err);
        }
    }
}

This ultimately calls writeStrongBinder in Parcel.cpp. Let's step in:

status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flatten_binder(ProcessState::self(), val, this);
}

Let's look at flatten_binder:

status_t flatten_binder(const sp<ProcessState>& /*proc*/,
    const sp<IBinder>& binder, Parcel* out)
{
    flat_binder_object obj;

    obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    // First check whether the binder is null
    if (binder != NULL) {
        // Check whether this is a local BBinder or a remote BpBinder
        IBinder *local = binder->localBinder();
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            if (proxy == NULL) {
                ALOGE("null proxy");
            }
            const int32_t handle = proxy ? proxy->handle() : 0;
            obj.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            obj.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }

    return finish_flatten_binder(binder, obj, out);
}

localBinder has two implementations:

(Figure: the two implementations of localBinder — IBinder::localBinder and BBinder::localBinder)

IBinder's implementation returns null, while BBinder overrides it to return this. The object we wrote here is a BBinder (the AMS), so local is non-null and we take the else branch.

That branch stores the binder into the obj struct. Next let's see what finish_flatten_binder does:

inline static status_t finish_flatten_binder(
    const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
    return out->writeObject(flat, false);
}

This performs the serialized write; out is the data Parcel from before;

So writeStrongBinder flattens the AMS into data;
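
From the Java side, the flatten/unflatten pair is easy to observe in isolation. A small Android-only sketch: write a Binder into a Parcel, rewind, and read it back — within the same process, unflattening a BINDER_TYPE_BINDER entry hands you the original object rather than a proxy:

import android.os.Binder;
import android.os.IBinder;
import android.os.Parcel;

public class StrongBinderRoundTrip {
    public static void demo() {
        IBinder original = new Binder();
        Parcel p = Parcel.obtain();
        try {
            p.writeStrongBinder(original);  // flatten_binder: local BBinder -> BINDER_TYPE_BINDER
            p.setDataPosition(0);
            IBinder restored = p.readStrongBinder();
            // In-process, the flat object unflattens back to the same Binder.
            System.out.println(restored == original);  // true
        } finally {
            p.recycle();
        }
    }
}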

transact

Next let's look at the implementation of transact;

mRemote here is the BinderProxy, so let's step into BinderProxy's transact method:

public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
    Binder.checkParcel(this, code, data, "Unreasonably large binder buffer");
    return transactNative(code, data, reply, flags);
}

This again calls into the native layer, ultimately reaching android_os_BinderProxy_transact;

{"transactNative",      "(ILandroid/os/Parcel;Landroid/os/Parcel;I)Z", (void*)android_os_BinderProxy_transact},

Let's step into that method:

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
    if (dataObj == NULL) {
        jniThrowNullPointerException(env, NULL);
        return JNI_FALSE;
    }

    Parcel* data = parcelForJavaObject(env, dataObj);
    if (data == NULL) {
        return JNI_FALSE;
    }
    Parcel* reply = parcelForJavaObject(env, replyObj);
    if (reply == NULL && replyObj != NULL) {
        return JNI_FALSE;
    }
    // Get the BpBinder
    IBinder* target = (IBinder*)
        env->GetLongField(obj, gBinderProxyOffsets.mObject);
    if (target == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
        return JNI_FALSE;
    }

    ALOGV("Java code calling transact on %p in Java object %p with code %" PRId32 "\n",
            target, obj, code);


    bool time_binder_calls;
    int64_t start_millis;
    if (kEnableBinderSample) {
        // Only log the binder call duration for things on the Java-level main thread.
        // But if we don't
        time_binder_calls = should_time_binder_calls();

        if (time_binder_calls) {
            start_millis = uptimeMillis();
        }
    }

    //printf("Transact from Java code to %p sending: ", target); data->print();
    // Call BpBinder's transact method
    status_t err = target->transact(code, *data, reply, flags);
    //if (reply) printf("Transact from Java code to %p received: ", target); reply->print();

    if (kEnableBinderSample) {
        if (time_binder_calls) {
            conditionally_log_binder_call(start_millis, target, code);
        }
    }

    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }

    signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
    return JNI_FALSE;
}

It first obtains data and reply, then fetches the BpBinder and calls its transact method. Let's see how BpBinder::transact is implemented:

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

This delegates straight to IPCThreadState's transact method. Let's jump there:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }
    
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    
    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif
        
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    
    return err;
}

The flags field can take several values:

flags |= TF_ACCEPT_FDS;

TF_ACCEPT_FDS = 0x10: the reply is allowed to contain file descriptors;

TF_ONE_WAY: the call is asynchronous; the caller does not wait for a reply;

TF_ROOT_OBJECT: the payload is the root object;

TF_STATUS_CODE: the payload is a 32-bit status code;
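
Of these, only TF_ONE_WAY is directly visible from the Java API, as IBinder.FLAG_ONEWAY. A hedged sketch of issuing the same raw transaction synchronously vs. one-way (the transaction code here is hypothetical):

import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

public class TransactFlagsDemo {
    // Hypothetical transaction code understood by the target service.
    static final int DEMO_TRANSACTION = IBinder.FIRST_CALL_TRANSACTION;

    static void call(IBinder target, boolean oneway) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeString("hello");
            // flags == 0: the caller blocks in waitForResponse until BR_REPLY arrives.
            // FLAG_ONEWAY (TF_ONE_WAY): transact returns after BR_TRANSACTION_COMPLETE
            // and the reply Parcel stays empty.
            int flags = oneway ? IBinder.FLAG_ONEWAY : 0;
            target.transact(DEMO_TRANSACTION, data, reply, flags);
        } finally {
            reply.recycle();
            data.recycle();
        }
    }
}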

writeTransactionData

So here we're on the synchronous path; transact first calls writeTransactionData. Let's step in:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }
    // The key part: write the BC_TRANSACTION command into mOut
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    
    return NO_ERROR;
}

The core of this method is writing the BC_TRANSACTION command into mOut;

waitForResponse

Back in transact, the next call is waitForResponse:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        
        cmd = (uint32_t)mIn.readInt32();
        
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        
        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    
    return err;
}

This method does quite a bit, but the core is talkWithDriver plus the switch over the command it returns;

talkWithDriver

Let's look at the logic of talkWithDriver:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    
    binder_write_read bwr;
    
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }
    
    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
        // 1. ioctl enters the Binder driver's binder_ioctl_write_read, which performs the write (binder_thread_write)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
                        << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }
    
    return err;
}

1. ioctl enters the Binder driver's binder_ioctl_write_read, which performs the write via binder_thread_write; the case executed there is BC_TRANSACTION:

static int binder_thread_write(struct binder_proc *proc,
			struct binder_thread *thread,
			binder_uintptr_t binder_buffer, size_t size,
			binder_size_t *consumed)
{
    switch (cmd) {
        case BC_TRANSACTION:
	case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
		return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY, 0);
            break;
	}
    }
}

This case calls straight into binder_transaction. The method is long, so let's analyze it step by step:

static void binder_transaction(struct binder_proc *proc,
			       struct binder_thread *thread,
			       struct binder_transaction_data *tr, int reply,
			       binder_size_t extra_buffers_size)
{
	int ret;
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	binder_size_t *offp, *off_end, *off_start;
	binder_size_t off_min;
	u8 *sg_bufp, *sg_buf_end;
	struct binder_proc *target_proc;
	struct binder_thread *target_thread = NULL;
	struct binder_node *target_node = NULL;
	struct list_head *target_list;
	wait_queue_head_t *target_wait;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	uint32_t return_error;
	struct binder_buffer_object *last_fixup_obj = NULL;
	binder_size_t last_fixup_min_off = 0;
	struct binder_context *context = proc->context;

	e = binder_transaction_log_add(&binder_transaction_log);
	e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
	e->from_proc = proc->pid;
	e->from_thread = thread->pid;
	e->target_handle = tr->target.handle;
	e->data_size = tr->data_size;
	e->offsets_size = tr->offsets_size;
	e->context_name = proc->context->name;

	if (reply) {
		in_reply_to = thread->transaction_stack;
		if (in_reply_to == NULL) {
			binder_user_error("%d:%d got reply transaction with no transaction stack\n",
					  proc->pid, thread->pid);
			return_error = BR_FAILED_REPLY;
			goto err_empty_call_stack;
		}
		binder_set_nice(in_reply_to->saved_priority);
		if (in_reply_to->to_thread != thread) {
			binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%d\n",
				proc->pid, thread->pid, in_reply_to->debug_id,
				in_reply_to->to_proc ?
				in_reply_to->to_proc->pid : 0,
				in_reply_to->to_thread ?
				in_reply_to->to_thread->pid : 0);
			return_error = BR_FAILED_REPLY;
			in_reply_to = NULL;
			goto err_bad_call_stack;
		}
		thread->transaction_stack = in_reply_to->to_parent;
		target_thread = in_reply_to->from;
		if (target_thread == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
		if (target_thread->transaction_stack != in_reply_to) {
			binder_user_error("%d:%d got reply transaction with bad target transaction stack %d, expected %d\n",
				proc->pid, thread->pid,
				target_thread->transaction_stack ?
				target_thread->transaction_stack->debug_id : 0,
				in_reply_to->debug_id);
			return_error = BR_FAILED_REPLY;
			in_reply_to = NULL;
			target_thread = NULL;
			goto err_dead_binder;
		}
		target_proc = target_thread->proc;
	} else {
           // Because cmd == BC_TRANSACTION, reply == false, so this branch runs
		if (tr->target.handle) {
			struct binder_ref *ref;

			ref = binder_get_ref(proc, tr->target.handle, true);
			if (ref == NULL) {
				binder_user_error("%d:%d got transaction to invalid handle\n",
					proc->pid, thread->pid);
				return_error = BR_FAILED_REPLY;
				goto err_invalid_target_handle;
			}
			target_node = ref->node;
		} else {
                   // Grab the Binder node (the ServiceManager's context manager node)
			target_node = context->binder_context_mgr_node;
			if (target_node == NULL) {
				return_error = BR_DEAD_REPLY;
				goto err_no_context_mgr_node;
			}
		}
		e->to_node = target_node->debug_id;
		target_proc = target_node->proc;
		if (target_proc == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
		if (security_binder_transaction(proc->tsk,
						target_proc->tsk) < 0) {
			return_error = BR_FAILED_REPLY;
			goto err_invalid_target_handle;
		}
		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
			struct binder_transaction *tmp;

			tmp = thread->transaction_stack;
			if (tmp->to_thread != thread) {
				binder_user_error("%d:%d got new transaction with bad transaction stack, transaction %d has target %d:%d\n",
					proc->pid, thread->pid, tmp->debug_id,
					tmp->to_proc ? tmp->to_proc->pid : 0,
					tmp->to_thread ?
					tmp->to_thread->pid : 0);
				return_error = BR_FAILED_REPLY;
				goto err_bad_call_stack;
			}
			while (tmp) {
				if (tmp->from && tmp->from->proc == target_proc)
					target_thread = tmp->from;
				tmp = tmp->from_parent;
			}
		}
	}
	if (target_thread) {
		e->to_thread = target_thread->pid;
		target_list = &target_thread->todo;
		target_wait = &target_thread->wait;
	} else {
		target_list = &target_proc->todo;
		target_wait = &target_proc->wait;
	}
	e->to_proc = target_proc->pid;

	/* TODO: reuse incoming transaction for reply */
   // Initialize t
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	binder_stats_created(BINDER_STAT_TRANSACTION);
   // Initialize tcomplete
	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}
	binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

	t->debug_id = ++binder_last_id;
	e->debug_id = t->debug_id;

	if (reply)
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "%d:%d BC_REPLY %d -> %d:%d, data %016llx-%016llx size %lld-%lld-%lld\n",
			     proc->pid, thread->pid, t->debug_id,
			     target_proc->pid, target_thread->pid,
			     (u64)tr->data.ptr.buffer,
			     (u64)tr->data.ptr.offsets,
			     (u64)tr->data_size, (u64)tr->offsets_size,
			     (u64)extra_buffers_size);
	else
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "%d:%d BC_TRANSACTION %d -> %d - node %d, data %016llx-%016llx size %lld-%lld-%lld\n",
			     proc->pid, thread->pid, t->debug_id,
			     target_proc->pid, target_node->debug_id,
			     (u64)tr->data.ptr.buffer,
			     (u64)tr->data.ptr.offsets,
			     (u64)tr->data_size, (u64)tr->offsets_size,
			     (u64)extra_buffers_size);

	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = task_euid(proc->tsk);
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);

	trace_binder_transaction(reply, t, target_node);
   // Allocate a chunk from the target's mmap'd buffer area
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, extra_buffers_size,
		!reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	trace_binder_transaction_alloc_buf(t->buffer);
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

	off_start = (binder_size_t *)(t->buffer->data +
				      ALIGN(tr->data_size, sizeof(void *)));
	offp = off_start;
    // Copy the data across — this is where Binder's single data copy happens
	if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
			   tr->data.ptr.buffer, tr->data_size)) {
		binder_user_error("%d:%d got transaction with invalid data ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, (const void __user *)(uintptr_t)
			   tr->data.ptr.offsets, tr->offsets_size)) {
		binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {
		binder_user_error("%d:%d got transaction with invalid offsets size, %lld\n",
				proc->pid, thread->pid, (u64)tr->offsets_size);
		return_error = BR_FAILED_REPLY;
		goto err_bad_offset;
	}
	if (!IS_ALIGNED(extra_buffers_size, sizeof(u64))) {
		binder_user_error("%d:%d got transaction with unaligned buffers size, %lld\n",
				  proc->pid, thread->pid,
				  (u64)extra_buffers_size);
		return_error = BR_FAILED_REPLY;
		goto err_bad_offset;
	}
	off_end = (void *)off_start + tr->offsets_size;
	sg_bufp = (u8 *)(PTR_ALIGN(off_end, sizeof(void *)));
	sg_buf_end = sg_bufp + extra_buffers_size;
	off_min = 0;
	for (; offp < off_end; offp++) {
		struct binder_object_header *hdr;
		size_t object_size = binder_validate_object(t->buffer, *offp);

		if (object_size == 0 || *offp < off_min) {
			binder_user_error("%d:%d got transaction with invalid offset (%lld, min %lld max %lld) or object.\n",
					  proc->pid, thread->pid, (u64)*offp,
					  (u64)off_min,
					  (u64)t->buffer->data_size);
			return_error = BR_FAILED_REPLY;
			goto err_bad_offset;
		}

		hdr = (struct binder_object_header *)(t->buffer->data + *offp);
		off_min = *offp + object_size;
		switch (hdr->type) {
		case BINDER_TYPE_BINDER:
		case BINDER_TYPE_WEAK_BINDER: {
			struct flat_binder_object *fp;

			fp = to_flat_binder_object(hdr);
			ret = binder_translate_binder(fp, t, thread);
			if (ret < 0) {
				return_error = BR_FAILED_REPLY;
				goto err_translate_failed;
			}
		} break;
		case BINDER_TYPE_HANDLE:
		case BINDER_TYPE_WEAK_HANDLE: {
			struct flat_binder_object *fp;

			fp = to_flat_binder_object(hdr);
			ret = binder_translate_handle(fp, t, thread);
			if (ret < 0) {
				return_error = BR_FAILED_REPLY;
				goto err_translate_failed;
			}
		} break;

		case BINDER_TYPE_FD: {
			struct binder_fd_object *fp = to_binder_fd_object(hdr);
			int target_fd = binder_translate_fd(fp->fd, t, thread,
							    in_reply_to);

			if (target_fd < 0) {
				return_error = BR_FAILED_REPLY;
				goto err_translate_failed;
			}
			fp->pad_binder = 0;
			fp->fd = target_fd;
		} break;
		case BINDER_TYPE_FDA: {
			struct binder_fd_array_object *fda =
				to_binder_fd_array_object(hdr);
			struct binder_buffer_object *parent =
				binder_validate_ptr(t->buffer, fda->parent,
						    off_start,
						    offp - off_start);
			if (!parent) {
				binder_user_error("%d:%d got transaction with invalid parent offset or type\n",
						  proc->pid, thread->pid);
				return_error = BR_FAILED_REPLY;
				goto err_bad_parent;
			}
			if (!binder_validate_fixup(t->buffer, off_start,
						   parent, fda->parent_offset,
						   last_fixup_obj,
						   last_fixup_min_off)) {
				binder_user_error("%d:%d got transaction with out-of-order buffer fixup\n",
						  proc->pid, thread->pid);
				return_error = BR_FAILED_REPLY;
				goto err_bad_parent;
			}
			ret = binder_translate_fd_array(fda, parent, t, thread,
							in_reply_to);
			if (ret < 0) {
				return_error = BR_FAILED_REPLY;
				goto err_translate_failed;
			}
			last_fixup_obj = parent;
			last_fixup_min_off =
				fda->parent_offset + sizeof(u32) * fda->num_fds;
		} break;
		case BINDER_TYPE_PTR: {
			struct binder_buffer_object *bp =
				to_binder_buffer_object(hdr);
			size_t buf_left = sg_buf_end - sg_bufp;

			if (bp->length > buf_left) {
				binder_user_error("%d:%d got transaction with too large buffer\n",
						  proc->pid, thread->pid);
				return_error = BR_FAILED_REPLY;
				goto err_bad_offset;
			}
			if (copy_from_user(sg_bufp,
					   (const void __user *)(uintptr_t)
					   bp->buffer, bp->length)) {
				binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
						  proc->pid, thread->pid);
				return_error = BR_FAILED_REPLY;
				goto err_copy_data_failed;
			}
			/* Fixup buffer pointer to target proc address space */
			bp->buffer = (uintptr_t)sg_bufp +
				target_proc->user_buffer_offset;
			sg_bufp += ALIGN(bp->length, sizeof(u64));

			ret = binder_fixup_parent(t, thread, bp, off_start,
						  offp - off_start,
						  last_fixup_obj,
						  last_fixup_min_off);
			if (ret < 0) {
				return_error = BR_FAILED_REPLY;
				goto err_translate_failed;
			}
			last_fixup_obj = bp;
			last_fixup_min_off = 0;
		} break;
		default:
			binder_user_error("%d:%d got transaction with invalid object type, %x\n",
				proc->pid, thread->pid, hdr->type);
			return_error = BR_FAILED_REPLY;
			goto err_bad_object_type;
		}
	}
	if (reply) {
		BUG_ON(t->buffer->async_transaction != 0);
		binder_pop_transaction(target_thread, in_reply_to);
	} else if (!(t->flags & TF_ONE_WAY)) {
		BUG_ON(t->buffer->async_transaction != 0);
		t->need_reply = 1;
		t->from_parent = thread->transaction_stack;
		thread->transaction_stack = t;
	} else {
		BUG_ON(target_node == NULL);
		BUG_ON(t->buffer->async_transaction != 1);
		if (target_node->has_async_transaction) {
			target_list = &target_node->async_todo;
			target_wait = NULL;
		} else
			target_node->has_async_transaction = 1;
	}
	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);
	if (target_wait)
		wake_up_interruptible(target_wait);
	return;

err_translate_failed:
err_bad_object_type:
err_bad_offset:
err_bad_parent:
err_copy_data_failed:
	trace_binder_transaction_failed_buffer_release(t->buffer);
	binder_transaction_buffer_release(target_proc, t->buffer, offp);
	t->buffer->transaction = NULL;
	binder_free_buf(target_proc, t->buffer);
err_binder_alloc_buf_failed:
	kfree(tcomplete);
	binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
err_alloc_tcomplete_failed:
	kfree(t);
	binder_stats_deleted(BINDER_STAT_TRANSACTION);
err_alloc_t_failed:
err_bad_call_stack:
err_empty_call_stack:
err_dead_binder:
err_invalid_target_handle:
err_no_context_mgr_node:
	binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
		     "%d:%d transaction failed %d, size %lld-%lld\n",
		     proc->pid, thread->pid, return_error,
		     (u64)tr->data_size, (u64)tr->offsets_size);

	{
		struct binder_transaction_log_entry *fe;

		fe = binder_transaction_log_add(&binder_transaction_log_failed);
		*fe = *e;
	}

	BUG_ON(thread->return_error != BR_OK);
	if (in_reply_to) {
		thread->return_error = BR_TRANSACTION_COMPLETE;
		binder_send_failed_reply(in_reply_to, return_error);
	} else
		thread->return_error = return_error;
}

Because cmd == BC_TRANSACTION, reply == false, so the else branch runs;

This method does the following:

  1. Get target_node, the target Binder node
  2. Get the target proc object
  3. Pick the todo and wait queues to use
  4. Allocate t and tcomplete
  5. Perform the single data copy
  6. binder_translate_binder converts the flat Binder object into a handle
  7. thread->transaction_stack = t, so the ServiceManager can find the client
  8. t->work.type = BINDER_WORK_TRANSACTION; hands the work to the ServiceManager
  9. tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE; will suspend the client
  10. wake_up_interruptible wakes up the ServiceManager

Next let's look at how the client gets suspended;

After the driver finishes the write, control returns within binder_ioctl_write_read; since the thread entered with a read buffer and now has work queued, it proceeds to the read path:

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
    // Put a BR_NOOP first
    if (*consumed == 0) {
	if (put_user(BR_NOOP, (uint32_t __user *)ptr))
            return -EFAULT;
	ptr += sizeof(uint32_t);
    }
    
    while (1) {
	uint32_t cmd;
	struct binder_transaction_data tr;
	struct binder_work *w;
	struct binder_transaction *t = NULL;
        // We already queued work on todo above, so the list is non-empty; list_first_entry fetches w,
        // which is the BINDER_WORK_TRANSACTION_COMPLETE from before
	if (!list_empty(&thread->todo)) {
		w = list_first_entry(&thread->todo, struct binder_work,
					 entry);
	}
        switch (w->type) {
            case BINDER_WORK_TRANSACTION_COMPLETE: {
		cmd = BR_TRANSACTION_COMPLETE;
		if (put_user(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);

		binder_stat_br(proc, thread, cmd);
		binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
				     "%d:%d BR_TRANSACTION_COMPLETE\n",
				     proc->pid, thread->pid);

		list_del(&w->entry);
		kfree(w);
		binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
            } break;
        }
    }
}

We already wrote the work item onto todo earlier, so the list is non-empty; list_first_entry retrieves w, which therefore is also non-null — it is the BINDER_WORK_TRANSACTION_COMPLETE mentioned above;

Inside the switch, the cmd written back is BR_TRANSACTION_COMPLETE. With that command delivered, binder_thread_read returns, and control comes back to talkWithDriver — which was called from waitForResponse, so let's go back there:

waitForResponse then handles cmd = BR_TRANSACTION_COMPLETE:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            // We're synchronous here, so reply != null: we don't goto finish, we keep
            // looping and call talkWithDriver again
            if (!reply && !acquireResult) goto finish;
            break;
    }
}

Since our call is synchronous, reply is non-null, so the goto is not taken; the while loop continues, talkWithDriver is called again, which re-enters binder_thread_read to read more data, and the thread finally blocks at ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));

At this point the client is suspended, waiting for the ServiceManager to finish and wake it up;

ServiceManager handles the added service


Next let's see how the ServiceManager handles the service being added;

To see that, recall from earlier where the ServiceManager was suspended: it blocks inside binder_thread_read at the following line:

ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));

Once woken, the work type the ServiceManager finds is BINDER_WORK_TRANSACTION, and it proceeds to the following logic:

if (t->buffer->target_node) {
	struct binder_node *target_node = t->buffer->target_node;

	tr.target.ptr = target_node->ptr;
	tr.cookie = target_node->cookie;
	t->saved_priority = task_nice(current);
	if (t->priority < target_node->min_priority &&
	    !(t->flags & TF_ONE_WAY))
		binder_set_nice(t->priority);
	else if (!(t->flags & TF_ONE_WAY) ||
		 t->saved_priority > target_node->min_priority)
		binder_set_nice(target_node->min_priority);
	cmd = BR_TRANSACTION;
} else {
	tr.target.ptr = 0;
	tr.cookie = 0;
	cmd = BR_REPLY;
}

So the command to deliver is BR_TRANSACTION, which the binder driver returns to user space. Let's see how that command is parsed — go straight to the binder_parse function in the servicemanager's binder.c:

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    switch(cmd) {
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;
                // Initialize reply
                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                // Function pointer: points to the svcmgr_handler passed into binder_loop
                res = func(bs, txn, &msg, &reply);
                // Send the reply back to the driver
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
    }
}

This parsing does three things:

  1. bio_init initializes reply
  2. res = func(bs, txn, &msg, &reply); — a function pointer to the svcmgr_handler passed into binder_loop (which registers all services)
  3. binder_send_reply

Let's step into the svcmgr_handler function; it routes to the registration logic and executes do_add_service:

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    switch(txn->code) {
        case SVC_MGR_ADD_SERVICE:
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        // Run this function to register the service
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;
    }
}

Let's step into do_add_service:

int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    // First look the service up
    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        si->next = svclist;
        // Save the service into svclist
        svclist = si;
    }
}

So the ServiceManager keeps all registered services in svclist;
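
Semantically, do_add_service is just "find by name, override a live entry's handle, otherwise prepend a new record." A tiny Java model of that behavior (using a map instead of the C linked list — purely illustrative, not AOSP code):

import java.util.HashMap;
import java.util.Map;

public class SvcListModel {
    // name -> handle, mirroring svcinfo's (name, handle) pair.
    private final Map<String, Integer> services = new HashMap<>();

    // Mirrors do_add_service: re-registering overrides the old handle.
    public void addService(String name, int handle) {
        Integer old = services.put(name, handle);
        if (old != null) {
            System.out.println(name + " ALREADY REGISTERED, OVERRIDE (old handle " + old + ")");
        }
    }

    // Mirrors the lookup side (find_svc).
    public Integer getService(String name) {
        return services.get(name);
    }
}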

Now back to binder_send_reply — let's see what commands it sends:

void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        data.txn.flags = TF_STATUS_CODE;
        data.txn.data_size = sizeof(int);
        data.txn.offsets_size = 0;
        data.txn.data.ptr.buffer = (uintptr_t)&status;
        data.txn.data.ptr.offsets = 0;
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}

It sends BC_REPLY to the binder driver via binder_write, which ends up in the driver's binder_thread_write, executing the BC_REPLY case:

case BC_REPLY: {
	struct binder_transaction_data tr;

	if (copy_from_user(&tr, ptr, sizeof(tr)))
		return -EFAULT;
	ptr += sizeof(tr);
	binder_transaction(proc, thread, &tr,
		cmd == BC_REPLY, 0);
	break;
}

This runs the binder_transaction logic again, now with cmd == BC_REPLY. We analyzed this method above, so we can go straight to the reply path:

t->work.type = BINDER_WORK_TRANSACTION;
list_add_tail(&t->work.entry, target_list);
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
list_add_tail(&tcomplete->entry, &thread->todo);
if (target_wait)
	wake_up_interruptible(target_wait);

At this point the reply is queued back to the client, and the client is woken up;

Finally, let's look at how the server side handles an incoming transact;

Here we go straight to IPCThreadState's executeCommand function:

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    case BR_TRANSACTION:
    {
        if (tr.target.ptr) {
            if (reinterpret_cast<RefBase::weakref_type*>(
                    tr.target.ptr)->attemptIncStrong(this)) {
                error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,&reply, tr.flags);
                reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
            } else {
                error = UNKNOWN_TRANSACTION;
            }

        }
    }
}

which calls BBinder's transact function. Let's step into it:

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}

It then calls onTransact; for a Java service this resolves to JavaBBinder's onTransact in android_util_Binder.cpp:

virtual status_t onTransact(
        uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags = 0)
    {
         jboolean res = env->CallBooleanMethod(mObject, gBinderOffsets.mExecTransact,
            code, reinterpret_cast<jlong>(&data), reinterpret_cast<jlong>(reply), flags);
    }

which ultimately calls the execTransact method of Binder.java, and from there your service's onTransact;
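
Putting the server side together: execTransact lands in your Binder's onTransact override. A minimal raw-Binder echo service plus the matching client call (Android-only sketch; the descriptor and transaction code are hypothetical):

import android.os.Binder;
import android.os.IBinder;
import android.os.Parcel;
import android.os.RemoteException;

public class EchoService extends Binder {
    static final String DESCRIPTOR = "com.example.Echo";             // hypothetical
    static final int TRANSACTION_ECHO = IBinder.FIRST_CALL_TRANSACTION;

    @Override
    protected boolean onTransact(int code, Parcel data, Parcel reply, int flags)
            throws RemoteException {
        if (code == TRANSACTION_ECHO) {
            data.enforceInterface(DESCRIPTOR);
            String msg = data.readString();
            reply.writeNoException();
            reply.writeString("echo: " + msg);
            return true;
        }
        return super.onTransact(code, data, reply, flags);
    }

    // Client side: this is what a generated proxy does under the hood.
    static String callEcho(IBinder remote, String msg) throws RemoteException {
        Parcel data = Parcel.obtain();
        Parcel reply = Parcel.obtain();
        try {
            data.writeInterfaceToken(DESCRIPTOR);
            data.writeString(msg);
            remote.transact(TRANSACTION_ECHO, data, reply, 0);
            reply.readException();
            return reply.readString();
        } finally {
            reply.recycle();
            data.recycle();
        }
    }
}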

(The original post includes a flow diagram here summarizing the whole registration sequence.)

With that, the complete flow of a process registering and retrieving a service is covered;

Next chapter preview


A brief analysis of the Dalvik VM process system;

Like, comment, and follow


If you've read this far, please leave a like and a follow — your support is my biggest motivation~~~
