Original paper: 3504035.3504050 (acm.org)
The English here is typed entirely by hand! It is my own summarizing and paraphrasing of the original paper, so some unavoidable spelling and grammar mistakes may appear; if you spot any, feel free to point them out in the comments! This article is closer to personal reading notes, so read with caution.
Table of Contents
1. TL;DR
1.1. Takeaways
1.2. Paper summary figure
2. Section-by-section reading
2.1. Abstract
2.2. Introduction
2.3. Related work
2.4. Preliminaries
2.5. Methodology
2.5.1. Problem definition
2.5.2. M2E approach
2.5.3. Optimization framework
2.6. Experiments and evaluation
2.6.1. Data collection and preprocessing
2.6.2. Baselines and metrics
2.6.3. Clustering results
2.6.4. Parameter sensitivity analysis
2.6.5. Factor analysis
2.7. Conclusion
3. Supplementary notes
3.1. Partially symmetric tensors
4. Reference
1. TL;DR
1.1. Takeaways
(1) This one does not seem to be a deep learning method~
1.2. Paper summary figure
2. Section-by-section reading
2.1. Abstract
①They propose Multi-view Multi-graph Embedding (M2E), a framework that fuses brain-network information from different views for clustering analysis
2.2. Introduction
①They give a conceptual overview figure of M2E
2.3. Related work
①Reviews existing graph embedding methods
②Compares M2E against prior multi-view clustering and multi-view embedding work
2.4. Preliminaries
①Notations:
②Definition 1: introduces the partially symmetric tensor (I don't think the authors explain it very clearly; they say that "an M-th order tensor that is partially symmetric in modes 1 through M is a rank-one partially symmetric tensor". See my supplementary notes in section 3.1 instead)
③Definition 2: mode-$k$ matricization, which unfolds a tensor $\mathcal{X} \in \mathbb{R}^{I_1 \times \cdots \times I_M}$ into a matrix $X_{(k)} \in \mathbb{R}^{I_k \times J}$, where $J = \prod_{m \neq k} I_m$
④Definition 3: CP factorization, which approximates a tensor by a sum of rank-one terms, $\mathcal{X} \approx \sum_{r=1}^{R} a_r^{(1)} \circ a_r^{(2)} \circ \cdots \circ a_r^{(M)}$,
which needs to minimize the estimation error $\min_{\{A^{(k)}\}} \big\| \mathcal{X} - \sum_{r=1}^{R} a_r^{(1)} \circ \cdots \circ a_r^{(M)} \big\|_F^2$;
this problem is non-convex in all factors jointly but convex in each factor with the others fixed, so it is typically solved by alternating over the matricized form $X_{(k)} \approx A^{(k)} \big( A^{(M)} \odot \cdots \odot A^{(k+1)} \odot A^{(k-1)} \odot \cdots \odot A^{(1)} \big)^{\top}$,
where $\odot$ denotes the Khatri-Rao product and $A^{(k)} = [a_1^{(k)}, \dots, a_R^{(k)}]$
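A tiny numpy illustration of Definitions 2 and 3 above (the unfolding convention, sizes, and variable names here are my own choices for the example, not the paper's notation):

```python
import numpy as np

def unfold(X, k):
    """Mode-k matricization: move mode k to the front, then flatten the rest."""
    return np.moveaxis(X, k, 0).reshape(X.shape[k], -1)

rng = np.random.default_rng(0)
d, n = 4, 3

# A rank-one third-order tensor b ∘ b ∘ f, the partially symmetric building block
# used later for brain-network tensors (same factor in modes 1 and 2).
b = rng.standard_normal(d)
f = rng.standard_normal(n)
T = np.einsum('i,j,k->ijk', b, b, f)   # shape (d, d, n)

print(unfold(T, 0).shape)              # (4, 12): mode-1 unfolding
print(unfold(T, 2).shape)              # (3, 16): mode-3 unfolding
```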
2.5. Methodology
2.5.1. Problem definition
①There are $n$ subjects observed under $V$ views; each subject has one brain connectivity network per view, and all networks share the same $d$ nodes (brain regions)
②For each view, the whole graph set consists of that view's $n$ subject-level connectivity matrices
③Collecting the graph sets over all $V$ views gives the full multi-view multi-graph data
④The goal is to learn a single embedding for each participant that fuses all views and can then be used for clustering (a small sketch of this data layout follows below)
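To make the data layout concrete, here is a minimal numpy sketch of how such multi-view multi-graph data could be organized; the sizes, the random "connectivity" matrices, and all variable names are illustrative placeholders, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(42)
n, d, V = 10, 6, 2   # toy sizes: subjects, brain regions (nodes), views

def toy_connectivity(d, rng):
    """Stand-in for a real subject-level connectivity matrix (symmetric, d x d)."""
    M = rng.standard_normal((d, d))
    return (M + M.T) / 2

# One list of graphs per view; every graph is defined on the same d brain regions.
multi_view_graphs = [[toy_connectivity(d, rng) for _ in range(n)] for _ in range(V)]

# Each view's graphs are later stacked into a d x d x n tensor (see section 2.5.2).
X_views = [np.stack(graphs, axis=2) for graphs in multi_view_graphs]
print(X_views[0].shape)   # (6, 6, 10)
```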
2.5.2. M2E approach
①For each view, the $n$ connectivity matrices are stacked into a $d \times d \times n$ third-order tensor, which is partially symmetric in modes 1 and 2
②Embedding function: each view's tensor is factorized with a partially symmetric CP factorization, sharing one factor matrix across modes 1 and 2 (the node-side factor) and using the mode-3 factor as that view's subject embeddings
③Common embedding learning: the view-specific subject embeddings are encouraged to agree with a single common embedding shared by all views
④Combining the two pieces gives the M2E objective, in which the first term leverages the dependencies among the multiple graphs within each view and the second term leverages the correlations across views
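As a worked reference, one plausible way to write the combined objective described in ①–④ is the following, with $\mathcal{X}^{(v)}$ the view-$v$ tensor, $B^{(v)} = [b^{(v)}_1, \dots, b^{(v)}_R]$ and $F^{(v)}$ its partially symmetric CP factors, $F$ the common embedding, and $\alpha$ a trade-off weight. This is my reconstruction under those assumptions; the exact form and any extra regularizers should be checked against the paper.

```latex
\min_{\{B^{(v)}, F^{(v)}\},\, F}\;
\sum_{v=1}^{V} \Big\| \mathcal{X}^{(v)} - \sum_{r=1}^{R} b^{(v)}_{r} \circ b^{(v)}_{r} \circ f^{(v)}_{r} \Big\|_{F}^{2}
\;+\; \alpha \sum_{v=1}^{V} \big\| F^{(v)} - F \big\|_{F}^{2}
```

The first sum ties the graphs within a view together through the shared node factor $B^{(v)}$; the second sum ties the views together through the common embedding $F$.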
2.5.3. Optimization framework
①The parameters to be estimated are the per-view factor matrices and the common embedding. The joint problem is non-convex, so no closed-form solution exists; the authors therefore optimize it iteratively, block by block, with an Alternating Direction Method of Multipliers (ADMM) approach (a simplified sketch of this alternating structure is given after this list).
②Using a variable-substitution technique, they fix two of the variable blocks and update the node-side factor matrices first: an augmented Lagrangian is formed with Lagrange multipliers and a penalty parameter, the subproblem is rewritten in terms of the substituted variable, and the factor is updated with a proximal-gradient-style step whose step size is governed by a Lipschitz coefficient, taken as the maximum eigenvalue of the relevant coefficient matrix. The Khatri-Rao and Hadamard products are used to compute the required matricized products efficiently, and the Lagrange multipliers are updated at the end of the step.
③Then, with the node-side factors and the common embedding fixed, the view-specific embedding factors are updated by minimizing the corresponding subproblem, which yields its own updating rule.
④Finally, the view-specific factors are fixed and the objective is minimized over the common embedding; the three updates are alternated until convergence.
⑤They also analyze the overall time complexity of the algorithm.
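As a rough illustration of the alternating structure only (this is not the paper's ADMM solver: it uses plain gradient steps, toy data, a fixed step size, and variable names of my own choosing), here is a minimal numpy sketch that fits the reconstructed objective from section 2.5.2:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, V, R = 8, 20, 2, 4          # toy sizes: nodes, subjects, views, rank
alpha, step, n_iter = 1.0, 1e-3, 300

# Toy multi-view data: each view is a d x d x n tensor of symmetric "connectivity" slices.
X = []
for _ in range(V):
    slices = []
    for _ in range(n):
        M = rng.standard_normal((d, d))
        slices.append((M + M.T) / 2)
    X.append(np.stack(slices, axis=2))

# Factors: node-side B_v (d x R), view-specific embeddings F_v (n x R), common embedding F.
B = [rng.standard_normal((d, R)) * 0.1 for _ in range(V)]
Fv = [rng.standard_normal((n, R)) * 0.1 for _ in range(V)]
F = np.mean(Fv, axis=0)

def reconstruct(Bv, Fv_):
    """sum_r b_r ∘ b_r ∘ f_r: the partially symmetric rank-R reconstruction."""
    return np.einsum('ir,jr,kr->ijk', Bv, Bv, Fv_)

for it in range(n_iter):
    for v in range(V):
        E = X[v] - reconstruct(B[v], Fv[v])                     # residual tensor
        # Gradient w.r.t. B_v (it appears in modes 1 and 2, hence two terms).
        gB = -2 * (np.einsum('ijk,jr,kr->ir', E, B[v], Fv[v])
                   + np.einsum('jik,jr,kr->ir', E, B[v], Fv[v]))
        # Gradient w.r.t. F_v: fitting term plus the pull toward the common embedding F.
        gF = -2 * np.einsum('ijk,ir,jr->kr', E, B[v], B[v]) + 2 * alpha * (Fv[v] - F)
        B[v] -= step * gB
        Fv[v] -= step * gF
    # Closed-form update of the common embedding: the mean of the view embeddings.
    F = np.mean(Fv, axis=0)

loss = sum(np.sum((X[v] - reconstruct(B[v], Fv[v])) ** 2) for v in range(V)) \
       + alpha * sum(np.sum((Fv[v] - F) ** 2) for v in range(V))
print('final objective:', loss)
# F (n x R) is the fused subject embedding; it could then be clustered, e.g. with k-means.
```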
2.6. Experiments and evaluation
2.6.1. Data collection and preprocessing
(1)Human Immunodeficiency Virus Infection (HIV)
①Sample: 35 patients and 35 controls, randomly selected from the dataset because of class imbalance
②Atlas: AAL 90
(2)Bipolar Disorder (BP)
①Sample: 52 BP and 45 controls
②Atlas: a self-generated parcellation with 82 regions
euthymia n. a normal, stable mood (neither depressed nor manic)
2.6.2. Baselines and metrics
①Introduces the compared baseline models and the evaluation metrics
②Hyper-parameters are selected by grid search over predefined candidate ranges
2.6.3. Clustering results
①They report a performance comparison table of all methods on both datasets
2.6.4. Parameter sensitivity analysis
①Sensitivity analysis of one hyper-parameter:
②Sensitivity analysis of the other hyper-parameter:
2.6.5. Factor analysis
①They relate the activity intensity of each brain region to the learned embedding factors
2.7. Conclusion
They design a novel multi-view multi-graph embedding framework based on partially-symmetric tensor factorization
3. Supplementary notes
3.1. Partially symmetric tensors
(1) Definition: a partially symmetric tensor is one whose entries stay unchanged under certain rearrangements of a subset of its indices. This is related to tensor symmetry, but unlike a fully symmetric tensor (whose entries are equal under every permutation of all indices), a partially symmetric tensor only requires invariance under some index permutations.
(2) Example: take a third-order tensor $\mathcal{A}$ with entries $a_{ijk}$. It can be called partially symmetric if one or more of the following hold:
① $a_{ijk} = a_{jik}$ (swapping the first and second indices)
② $a_{ijk} = a_{kji}$ (swapping the first and third indices)
③ both of the conditions above hold simultaneously
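A quick numpy check of condition ① (sizes and names are made up for the illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 7

# Build a tensor that is symmetric in its first two modes by construction:
# stack symmetric d x d matrices along the third mode.
slices = []
for _ in range(n):
    M = rng.standard_normal((d, d))
    slices.append((M + M.T) / 2)
A = np.stack(slices, axis=2)                   # shape (d, d, n)

# Condition ①: a_ijk == a_jik, i.e., swapping modes 1 and 2 leaves A unchanged.
print(np.allclose(A, np.swapaxes(A, 0, 1)))    # True

# A generic random tensor does not satisfy it.
B = rng.standard_normal((d, d, n))
print(np.allclose(B, np.swapaxes(B, 0, 1)))    # False
```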
4. Reference
Liu, Y. et al. (2018) 'Multi-View Multi-Graph Embedding for Brain Network Clustering Analysis', AAAI. doi: https://doi.org/10.48550/arXiv.1806.07703