Reading Notes (English) on the Stanford Data Mining Textbook, 3rd Edition: Chapter 11 Dimensionality Reduction


Source: the publicly available English book and slides corresponding to the Stanford Data Mining Textbook, 3rd Edition.

Chapter 11 Dimensionality Reduction

Let M be a square matrix. Let λ be a constant and e a nonzero column vector with the same number of rows as M. Then λ is an eigenvalue of M and e is the corresponding eigenvector of M if $Me = \lambda e$.

Start with any unit vector v of the appropriate length and compute $M^i v$ iteratively until it converges. When M is a stochastic matrix, the limiting vector is the principal eigenvector (the eigenvector with the largest eigenvalue), and its corresponding eigenvalue is 1. This method for finding the principal eigenvector, called power iteration, works quite generally, although if the principal eigenvalue (the eigenvalue associated with the principal eigenvector) is not 1, then as i grows, the ratio of $M^{i+1}v$ to $M^i v$ approaches the principal eigenvalue while $M^i v$ approaches a vector (probably not a unit vector) with the same direction as the principal eigenvector.
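
A minimal NumPy sketch of this idea (not from the book): instead of tracking the ratio of $M^{i+1}v$ to $M^i v$, this common variant renormalizes the vector at every step and reads off the eigenvalue with a Rayleigh quotient; the matrix, tolerance, and function name are illustrative choices, and the stopping test assumes the principal eigenvalue is positive.

```python
import numpy as np

def power_iteration(M, tol=1e-10, max_iter=1000):
    """Approximate the principal eigenpair (largest eigenvalue, unit eigenvector) of M."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])   # any unit starting vector
    for _ in range(max_iter):
        w = M @ v
        w = w / np.linalg.norm(w)                   # renormalize to a unit vector
        if np.linalg.norm(w - v) < tol:             # stop when the change becomes small
            v = w
            break
        v = w
    eigenvalue = v @ M @ v                          # Rayleigh quotient v^T M v for unit v
    return eigenvalue, v

# Small symmetric example; the eigenvalues of this matrix are 7 and 2.
M = np.array([[3.0, 2.0],
              [2.0, 6.0]])
lam1, x1 = power_iteration(M)                       # lam1 ≈ 7
```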

To find the second eigenpair we create a new matrix $M^* = M - \lambda_1 x x^T$, where $(x, \lambda_1)$ is the first eigenpair. Then, use power iteration on $M^*$ to compute its principal eigenpair; the resulting $\lambda^*$ and $x^*$ are the second-largest eigenvalue of M and its corresponding eigenvector. Intuitively, what we have done is eliminate the influence of a given eigenvector by setting its associated eigenvalue to zero. The formal justification is the following two observations. If $M^* = M - \lambda x x^T$, where x and λ are the eigenpair with the largest eigenvalue, then:

  1. x is also an eigenvector of $M^*$, and its corresponding eigenvalue is 0. In proof, observe that

    $M^*x = (M - \lambda x x^T)x = Mx - \lambda x x^T x = Mx - \lambda x = 0$

    At the next-to-last step we use the fact that $x^Tx = 1$ because x is a unit vector.

  2. Conversely, if v and $\lambda_v$ are an eigenpair of a symmetric matrix M other than the first eigenpair $(x, \lambda)$, then they are also an eigenpair of $M^*$.
    Proof:

    $M^*v = (M^*)^Tv = (M - \lambda x x^T)^Tv = M^Tv - \lambda x(x^Tv) = M^Tv = \lambda_v v$

    This sequence of equalities needs the following justifications:
    (a) If M is symmetric, then $M = M^T$.
    (b) The eigenvectors of a symmetric matrix are orthogonal. That is, the dot product of any two distinct eigenvectors of a matrix is 0. We do not prove this statement here.
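
Putting the two observations above to work, a short sketch of finding the second eigenpair by this deflation step, continuing the hypothetical `power_iteration` function and the symmetric matrix `M` from the earlier sketch:

```python
import numpy as np

lam1, x1 = power_iteration(M)
M_star = M - lam1 * np.outer(x1, x1)   # M* = M - lambda_1 * x x^T
lam2, x2 = power_iteration(M_star)     # principal eigenpair of M* is the second eigenpair of M

# Cross-check against a library eigensolver (eigh handles symmetric matrices).
vals, _ = np.linalg.eigh(M)
print(lam1, lam2, sorted(vals, reverse=True))
```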

Principal-component analysis, or PCA, is a technique for taking a dataset consisting of a set of tuples representing points in a high-dimensional space and finding the directions along which the tuples line up best. The idea is to treat the set of tuples as a matrix M and find the eigenvectors for $MM^T$ or $M^TM$. The matrix of these eigenvectors can be thought of as a rigid rotation in a high-dimensional space. When you apply this transformation to the original data, the axis corresponding to the principal eigenvector is the one along which the points are most “spread out.” More precisely, this axis is the one along which the variance of the data is maximized. Put another way, the points can best be viewed as lying along this axis, with small deviations from this axis. Likewise, the axis corresponding to the second eigenvector (the eigenvector corresponding to the second-largest eigenvalue) is the axis along which the variance of distances from the first axis is greatest, and so on.
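
A minimal PCA sketch along these lines, assuming the rows of `M` are the points; the toy data is only for illustration, and `np.linalg.eigh` is used as a stand-in for whatever eigensolver one prefers on the symmetric matrix $M^TM$:

```python
import numpy as np

# Toy data: each row of M is a point, each column a dimension.
M = np.array([[1.0, 2.0],
              [2.0, 1.0],
              [3.0, 4.0],
              [4.0, 3.0]])

# Eigenpairs of M^T M; eigh returns them in ascending order of eigenvalue.
eigvals, eigvecs = np.linalg.eigh(M.T @ M)
order = np.argsort(eigvals)[::-1]
E = eigvecs[:, order]          # columns of E are the new axes, principal axis first

# Rotating the points: the first column of M @ E holds each point's coordinate
# along the principal axis, the direction of greatest spread.
transformed = M @ E
print(transformed)
```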

Any matrix of orthonormal vectors (unit vectors that are orthogonal to one another) represents a rotation and/or reflection of the axes of a Euclidean space.

We conclude that the eigenvalues of $MM^T$ are the eigenvalues of $M^TM$ plus additional 0’s. If the dimension of $MM^T$ were less than the dimension of $M^TM$, then the opposite would be true; the eigenvalues of $M^TM$ would be those of $MM^T$ plus additional 0’s.

Let M be an $m \times n$ matrix, and let the rank of M be r. Recall that the rank of a matrix is the largest number of rows (or equivalently columns) we can choose for which no nonzero linear combination of the rows is the all-zero vector 0 (we say a set of such rows or columns is independent). Then we can find matrices U, Σ, and V as shown in Fig. 11.5 with the following properties:

  1. U is an $m \times r$ column-orthonormal matrix; that is, each of its columns is a unit vector and the dot product of any two columns is 0.
  2. V is an $n \times r$ column-orthonormal matrix. Note that we always use V in its transposed form, so it is the rows of $V^T$ that are orthonormal.
  3. Σ is a diagonal matrix; that is, all elements not on the main diagonal are 0. The elements of Σ are called the singular values of M.

[Figure 11.5: the form of a singular-value decomposition, $M = U \Sigma V^T$]
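
A short sketch of obtaining such a decomposition with NumPy (the example matrix is made up). Note two small mismatches with the book's form: `numpy.linalg.svd` returns $V^T$ rather than V, and its "thin" SVD keeps $\min(m, n)$ singular values, which may include zeros if the rank r is smaller than that.

```python
import numpy as np

# Toy m x n matrix (m = 4, n = 2).
M = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0],
              [7.0, 8.0]])

# full_matrices=False gives the thin SVD: U is m x k, sigma has k entries,
# VT is k x n, where k = min(m, n); singular values come back in decreasing order.
U, sigma, VT = np.linalg.svd(M, full_matrices=False)

# The product U * diag(sigma) * V^T reconstructs M (up to floating-point error).
print(np.allclose(U @ np.diag(sigma) @ VT, M))

# Column-orthonormality: U^T U is the identity, and the rows of V^T are orthonormal.
print(np.allclose(U.T @ U, np.eye(U.shape[1])))
print(np.allclose(VT @ VT.T, np.eye(VT.shape[0])))
```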

Suppose we want to represent a very large matrix M by its SVD components U, Σ, and V, but these matrices are also too large to store conveniently. The best way to reduce the dimensionality of the three matrices is to set the smallest singular values to zero. If we set the s smallest singular values to 0, then we can also eliminate the corresponding s columns of U and V.

How Many Singular Values Should We Retain?

A useful rule of thumb is to retain enough singular values to make up 90% of the energy in Σ. That is, the sum of the squares of the retained singular values should be at least 90% of the sum of the squares of all the singular values.

The choice of the lowest singular values to drop when we reduce the number of dimensions can be shown to minimize the root-mean-square error between the original matrix M and its approximation.
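
A sketch of the 90%-energy rule of thumb applied to the singular values, continuing the hypothetical `U`, `sigma`, `VT` from the SVD sketch above; the function name and threshold parameter are illustrative:

```python
import numpy as np

def truncate_svd(U, sigma, VT, energy=0.90):
    """Keep the fewest leading singular values whose squared sum reaches the
    given fraction of the total energy; drop the matching columns of U and rows of V^T."""
    cumulative = np.cumsum(sigma ** 2) / np.sum(sigma ** 2)
    k = int(np.searchsorted(cumulative, energy)) + 1   # smallest k reaching the threshold
    return U[:, :k], sigma[:k], VT[:k, :]

U_k, sigma_k, VT_k = truncate_svd(U, sigma, VT)
M_k = U_k @ np.diag(sigma_k) @ VT_k    # reduced-rank approximation of M
```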

Since $M = U\Sigma V^T$ and U is column-orthonormal, $M^TM = V\Sigma U^TU\Sigma V^T = V\Sigma^2 V^T$. This says that V is the matrix of eigenvectors of $M^TM$ and $\Sigma^2$ is the diagonal matrix whose entries are the corresponding eigenvalues.

Thus, the same algorithm that computes the eigenpairs for $M^TM$ gives us the matrix V for the SVD of M itself. It also gives us the singular values for this SVD; just take the square roots of the eigenvalues for $M^TM$. U is the matrix of eigenvectors of $MM^T$.
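
A small numerical check of this relationship (the matrix is a toy example, and the eigenvectors returned by `np.linalg.eigh` may differ from the columns of V produced by an SVD routine by sign):

```python
import numpy as np

M = np.array([[1.0, 1.0],
              [0.0, 1.0],
              [1.0, 0.0]])

# Eigenpairs of the symmetric matrix M^T M, reordered so the largest comes first.
eigvals, V = np.linalg.eigh(M.T @ M)
order = np.argsort(eigvals)[::-1]
eigvals, V = eigvals[order], V[:, order]

# Singular values of M are the square roots of these eigenvalues.
singular_values = np.sqrt(np.clip(eigvals, 0.0, None))
print(singular_values)
print(np.linalg.svd(M, compute_uv=False))   # should agree up to rounding
```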

Definition of CUR

Let M be a matrix of m rows and n columns. Pick a target number of “concepts” r to be used in the decomposition. A CUR-decomposition of M is a randomly chosen set of r columns of M, which form the $m \times r$ matrix C, and a randomly chosen set of r rows of M, which form the $r \times n$ matrix R. There is also an $r \times r$ matrix U that is constructed from C and R as follows:

  1. Let W be the $r \times r$ matrix that is the intersection of the chosen columns of C and the chosen rows of R. That is, the element in row i and column j of W is the element of M whose column is the jth column of C and whose row is the ith row of R.
  2. Compute the SVD of W; say $W = X\Sigma Y^T$.
  3. Compute $\Sigma^+$, the Moore-Penrose pseudoinverse of the diagonal matrix Σ. That is, if the ith diagonal element of Σ is $\sigma \ne 0$, then replace it by 1/σ. But if the ith element is 0, leave it as 0.
  4. Let $U = Y(\Sigma^+)^2X^T$.
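
A minimal sketch of these four steps, assuming the chosen row and column indices are already in hand; the function name, tolerance, and toy matrix are illustrative:

```python
import numpy as np

def cur_middle_matrix(M, row_idx, col_idx):
    """Build the middle matrix U of a CUR decomposition from the intersection W
    of the chosen rows (for R) and columns (for C) of M."""
    W = M[np.ix_(row_idx, col_idx)]        # step 1: W[i, j] = M[row_idx[i], col_idx[j]]
    X, sigma, YT = np.linalg.svd(W)        # step 2: W = X Sigma Y^T
    sigma_plus = np.array([1.0 / s if s > 1e-12 else 0.0 for s in sigma])
    # step 3: Sigma^+ inverts each nonzero diagonal entry and leaves zeros alone
    return YT.T @ np.diag(sigma_plus ** 2) @ X.T   # step 4: U = Y (Sigma^+)^2 X^T

# Example: suppose rows 0 and 2 and columns 1 and 3 of a toy matrix were chosen.
M = np.arange(20, dtype=float).reshape(4, 5)
U = cur_middle_matrix(M, row_idx=[0, 2], col_idx=[1, 3])
```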

Having selected each of the columns of M, we scale each column by dividing its elements by the square root of the expected number of times this column would be picked. That is, we divide the elements of the jth column of M, if it is selected, by $\sqrt{rq_j}$, where $q_j$ is the sum of the squares of the elements of the jth column divided by the sum of the squares of all the elements of M. The scaled column of M becomes a column of C.
Rows of M are selected for R in the analogous way. For each row of R we select from the rows of M, choosing row i with probability $p_i$. Recall $p_i$ is the sum of the squares of the elements of the ith row divided by the sum of the squares of all the elements of M. We then scale each chosen row by dividing by $\sqrt{rp_i}$ if it is the ith row of M that was chosen.
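
A sketch of the column-selection step with these probabilities and the $\sqrt{rq_j}$ scaling; row selection is entirely analogous. The function name and seed handling are illustrative choices, not from the book.

```python
import numpy as np

def choose_columns(M, r, seed=0):
    """Pick r columns of M at random (with replacement), column j chosen with
    probability q_j, and scale each chosen column by 1 / sqrt(r * q_j)."""
    rng = np.random.default_rng(seed)
    q = (M ** 2).sum(axis=0) / (M ** 2).sum()   # q_j: column j's share of the total energy
    cols = rng.choice(M.shape[1], size=r, p=q)  # sample r column indices
    C = M[:, cols] / np.sqrt(r * q[cols])       # scale each selected column
    return C, cols
```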

It is quite possible that a single row or column is selected more than once. However, it is also possible to combine k rows of R that are each the same row of the matrix M into a single row of R, thus leaving R with fewer rows. Likewise, k columns of C that each come from the same column of M can be combined into one column of C. However, for either rows or columns, the remaining vector should have each of its elements multiplied by $\sqrt{k}$.
When we merge some rows and/or columns, it is possible that R has fewer rows than C has columns, or vice versa. As a consequence, W will not be a square matrix. However, we can still take its pseudoinverse by decomposing it into $W = X\Sigma Y^T$, where Σ is now a diagonal matrix with some all-0 rows or columns, whichever it has more of. To take the pseudoinverse of such a diagonal matrix, we treat each element on the diagonal as usual (invert nonzero elements and leave 0 as it is), but then we must transpose the result.
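
A small sketch of that pseudoinverse rule for a diagonal-but-not-square Σ: invert the nonzero diagonal entries, leave the zeros, then transpose. The example matrix is made up; the final comparison with `np.linalg.pinv` is just a sanity check.

```python
import numpy as np

def diag_pseudoinverse(S):
    """Pseudoinverse of a diagonal (possibly non-square) matrix: invert every
    nonzero diagonal entry, leave zeros as they are, then transpose the result."""
    P = np.zeros_like(S)
    for i in range(min(S.shape)):
        if abs(S[i, i]) > 1e-12:
            P[i, i] = 1.0 / S[i, i]
    return P.T                              # the final transpose is essential

# A 3 x 2 diagonal matrix with an extra all-zero row (as can happen after merging rows of R).
S = np.array([[4.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])
print(diag_pseudoinverse(S))                                   # 2 x 3 matrix
print(np.allclose(diag_pseudoinverse(S), np.linalg.pinv(S)))   # agrees with NumPy's pinv
```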

Summary of Chapter 11

  • Dimensionality Reduction: The goal of dimensionality reduction is to replace a large matrix by two or more other matrices whose sizes are much smaller than the original, but from which the original can be approximately reconstructed, usually by taking their product.
  • Eigenvalues and Eigenvectors: A matrix may have several eigenvectors such that when the matrix multiplies the eigenvector, the result is a constant multiple of the eigenvector. That constant is the eigenvalue associated with this eigenvector. Together the eigenvector and its eigenvalue are called an eigenpair.
  • Finding Eigenpairs by Power Iteration: We can find the principal eigenvector (eigenvector with the largest eigenvalue) by starting with any vector and repeatedly multiplying the current vector by the matrix to get a new vector. When the changes to the vector become small, we can treat the result as a close approximation to the principal eigenvector. By modifying the matrix, we can then use the same iteration to get the second eigenpair (that with the second-largest eigenvalue), and similarly get each of the eigenpairs in turn, in order of decreasing value of the eigenvalue.
  • Principal-Component Analysis: This technique for dimensionality reduction views data consisting of a collection of points in a multidimensional space as a matrix, with rows corresponding to the points and columns to the dimensions. The product of this matrix and its transpose has eigenpairs, and the principal eigenvector can be viewed as the direction in the space along which the points best line up. The second eigenvector represents the direction in which deviations from the principal eigenvector are the greatest, and so on.
  • Dimensionality Reduction by PCA: By representing the matrix of points by a small number of its eigenvectors, we can approximate the data in a way that minimizes the root-mean-square error for the given number of columns in the representing matrix.
  • Singular-Value Decomposition: The singular-value decomposition of a matrix consists of three matrices, U, Σ, and V. The matrices U and V are column-orthonormal, meaning that as vectors, the columns are orthogonal, and their lengths are 1. The matrix Σ is a diagonal matrix, and the values along its diagonal are called singular values. The product of U, Σ, and the transpose of V equals the original matrix.
  • Concepts: SVD is useful when there are a small number of concepts that connect the rows and columns of the original matrix. For example, if the original matrix represents the ratings given by movie viewers (rows) to movies (columns), the concepts might be the genres of the movies. The matrix U connects rows to concepts, Σ represents the strengths of the concepts, and V connects the concepts to columns.
  • Queries Using the Singular-Value Decomposition: We can use the decomposition to relate new or hypothetical rows of the original matrix to the concepts represented by the decomposition. Multiply a row by the matrix V of the decomposition to get a vector indicating the extent to which that row matches each of the concepts.
  • Using SVD for Dimensionality Reduction: In a complete SVD for a matrix, U and V are typically as large as the original. To use fewer columns for U and V, delete the columns corresponding to the smallest singular values from U, V, and Σ. This choice minimizes the error in reconstructing the original matrix from the modified U, Σ, and V.
  • Decomposing Sparse Matrices: Even in the common case where the given matrix is sparse, the matrices constructed by SVD are dense. The CUR decomposition seeks to decompose a sparse matrix into sparse, smaller matrices whose product approximates the original matrix.
  • CUR Decomposition: This method chooses from a given sparse matrix a set of columns C and a set of rows R, which play the role of U and $V^T$ in SVD; the user can pick any number of rows and columns. The choice of rows and columns is made randomly with a distribution that depends on the Frobenius norm, or the square root of the sum of the squares of the elements. Between C and R is a square matrix called U that is constructed by a pseudo-inverse of the intersection of the chosen rows and columns.

END
