Learning English with TED Talks: AI isn't as smart as you think -- but it could be, by Jeff Dean


AI isn’t as smart as you think – but it could be


Link: https://www.ted.com/talks/jeff_dean_ai_isn_t_as_smart_as_you_think_but_it_could_be

Speaker: Jeff Dean

Jeffrey Adgate “Jeff” Dean (born July 23, 1968) is an American computer scientist and software engineer. Since 2018, he has been the lead of Google AI.[1] He was appointed Alphabet’s chief scientist in 2023 after a reorganization of Alphabet’s AI focused groups.[2]

Date: August 2021

Contents

  • AI isn't as smart as you think -- but it could be
    • Introduction
    • Vocabulary
    • Transcript
    • Q&A with Chris Anderson
    • Summary
    • Afterword

Introduction

What is AI, really? Jeff Dean, the head of Google’s AI efforts, explains the underlying technology that enables artificial intelligence to do all sorts of things, from understanding language to diagnosing disease – and presents a roadmap for building better, more responsible systems that have a deeper understanding of the world. (Followed by a Q&A with head of TED Chris Anderson)

Vocabulary

wedge into: to squeeze into a cramped space

when we were all wedged into a tiny office space

emulate [ˈemjuleɪt]: to imitate; to model on

a series of interconnected artificial neurons that loosely emulate the properties of your real neurons.

tailored to: fitted to; well suited to

we also got excited about how we could build hardware that was better tailored to the kinds of computations neural networks wanted to do.

hone: to sharpen; to refine through practice

hydroponic [ˌhaɪdrəˈpɒnɪk]: relating to growing plants in nutrient solution rather than soil

I’ve been honing my gardening skills, experimenting with vertical hydroponic gardening.

Transcript

Hi, I’m Jeff.

I lead AI Research and Health at Google.

I joined Google more than 20 years ago,

when we were all wedged
into a tiny office space,

above what’s now a T-Mobile store
in downtown Palo Alto.

I’ve seen a lot of computing
transformations in that time,

and in the last decade, we’ve seen AI
be able to do tremendous things.

But we’re still doing it
all wrong in many ways.

That’s what I want
to talk to you about today.

But first, let’s talk
about what AI can do.

So in the last decade,
we’ve seen tremendous progress

in how AI can help computers see,
understand language,

understand speech better than ever before.

Things that we couldn’t do
before, now we can do.

If you think about computer vision alone,

just in the last 10 years,

computers have effectively
developed the ability to see;

10 years ago, they couldn’t see,
now they can see.

You can imagine this has had
a transformative effect

on what we can do with computers.

So let’s look at a couple
of the great applications

enabled by these capabilities.

We can better predict flooding,
keep everyone safe,

using machine learning.

We can translate over 100 languages
so we all can communicate better,

and better predict and diagnose disease,

where everyone gets
the treatment that they need.

So let’s look at two key components

that underlie the progress
in AI systems today.

The first is neural networks,

a breakthrough approach to solving
some of these difficult problems

that has really shone
in the last 15 years.

But they’re not a new idea.

And the second is computational power.

It actually takes a lot
of computational power

to make neural networks
able to really sing,

and in the last 15 years,
we’ve been able to have that,

and that’s partly what’s enabled
all this progress.

But at the same time,
I think we’re doing several things wrong,

and that’s what I want
to talk to you about

at the end of the talk.

First, a bit of a history lesson.

So for decades,

almost since the very
beginning of computing,

people have wanted
to be able to build computers

that could see, understand language,
understand speech.

The earliest approaches
to this, generally,

people were trying to hand-code
all the algorithms

that you need to accomplish
those difficult tasks,

and it just turned out
to not work very well.

But in the last 15 years,
a single approach

unexpectedly advanced all these different
problem spaces all at once:

neural networks.

So neural networks are not a new idea.

They’re kind of loosely based

on some of the properties
that are in real neural systems.

And many of the ideas
behind neural networks

have been around since the 1960s and 70s.

A neural network is what it sounds like,

a series of interconnected
artificial neurons

that loosely emulate the properties
of your real neurons.

An individual neuron
in one of these systems

has a set of inputs,

each with an associated weight,

and the output of a neuron

is a function of those inputs
multiplied by those weights.

So pretty simple,

and lots and lots of these work together
to learn complicated things.
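The neuron just described, whose output is a function of its inputs multiplied by their weights, can be sketched in a few lines of Python (a toy illustration for these notes, not any real library's implementation; the sigmoid activation is one common choice of "function"):

```python
import math

def neuron(inputs, weights, bias=0.0):
    """A single artificial neuron: weighted sum of inputs, then an activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid squashes output into (0, 1)

# Two inputs, each with an associated weight:
out = neuron([1.0, 0.5], [0.8, -0.4])
```

Lots and lots of such units, wired so that one layer's outputs become the next layer's inputs, make up a neural network.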

So how do we actually learn
in a neural network?

It turns out the learning process

consists of repeatedly making
tiny little adjustments

to the weight values,

strengthening the influence
of some things,

weakening the influence of others.

By driving the overall system
towards desired behaviors,

these systems can be trained
to do really complicated things,

like translate
from one language to another,

detect what kind
of objects are in a photo,

all kinds of complicated things.
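The "tiny little adjustments" to the weight values are, in practice, gradient-descent steps: each weight is nudged in the direction that reduces the error. A minimal sketch for a single linear neuron with a squared-error loss (illustrative only, with a made-up target):

```python
def train_step(weights, inputs, target, lr=0.01):
    """One tiny adjustment: nudge each weight to shrink the prediction error."""
    prediction = sum(x * w for x, w in zip(inputs, weights))
    error = prediction - target
    # Gradient of squared error w.r.t. each weight is 2 * error * x,
    # so subtracting it strengthens some influences and weakens others.
    return [w - lr * 2 * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(500):
    weights = train_step(weights, [1.0, 2.0], target=3.0)
# After many repeated small steps, the prediction approaches the target.
```

Repeating this over millions of examples is what drives the overall system toward the desired behavior.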

I first got interested in neural networks

when I took a class on them
as an undergraduate in 1990.

At that time,

neural networks showed
impressive results on tiny problems,

but they really couldn’t scale to do
real-world important tasks.

But I was super excited.

(Laughter)

I felt maybe we just needed
more compute power.

And the University of Minnesota
had a 32-processor machine.

I thought, "With more compute power,

boy, we could really make
neural networks really sing."

So I decided to do a senior thesis
on parallel training of neural networks,

the idea of using processors in a computer
or in a computer system

to all work toward the same task,

that of training neural networks.

32 processors, wow,

we’ve got to be able
to do great things with this.

But I was wrong.

Turns out we needed about a million times
as much computational power

as we had in 1990

before we could actually get
neural networks to do impressive things.

But starting around 2005,

thanks to the computing progress
of Moore’s law,

we actually started to have
that much computing power,

and researchers in a few universities
around the world started to see success

in using neural networks for a wide
variety of different kinds of tasks.

I and a few others at Google
heard about some of these successes,

and we decided to start a project
to train very large neural networks.

One system that we trained,

we trained with 10 million
randomly selected frames

from YouTube videos.

The system developed the capability

to recognize all kinds
of different objects.

And it being YouTube, of course,

it developed the ability
to recognize cats.

YouTube is full of cats.

(Laughter)

But what made that so remarkable

is that the system was never told
what a cat was.

So using just patterns in data,

the system honed in on the concept
of a cat all on its own.

All of this occurred at the beginning
of a decade-long string of successes,

of using neural networks
for a huge variety of tasks,

at Google and elsewhere.

Many of the things you use every day,

things like better speech
recognition for your phone,

improved understanding
of queries and documents

for better search quality,

better understanding of geographic
information to improve maps,

and so on.

Around that time,

we also got excited about how we could
build hardware that was better tailored

to the kinds of computations
neural networks wanted to do.

Neural network computations
have two special properties.

The first is they’re very tolerant
of reduced precision.

Couple of significant digits,
you don’t need six or seven.

And the second is that all the
algorithms are generally composed

of different sequences
of matrix and vector operations.

So if you can build a computer

that is really good at low-precision
matrix and vector operations

but can’t do much else,

that’s going to be great
for neural-network computation,

even though you can’t use it
for a lot of other things.

And if you build such things,
people will find amazing uses for them.
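The tolerance for reduced precision can be demonstrated by quantizing values to 8-bit integers before a matrix-vector product. This is a toy sketch in plain Python (real TPUs use dedicated matrix units and far more sophisticated numeric formats), but it shows why a couple of significant digits is often enough:

```python
def quantize(values, scale=127.0):
    """Map floats in [-1, 1] to 8-bit integers: a couple of significant digits."""
    return [max(-127, min(127, round(v * scale))) for v in values]

def matvec_int8(matrix_q, vector_q, scale=127.0):
    """Low-precision matrix-vector product, rescaled back to floats."""
    return [sum(m * v for m, v in zip(row, vector_q)) / (scale * scale)
            for row in matrix_q]

W = [[0.5, -0.25], [0.1, 0.9]]
x = [0.8, 0.4]
approx = matvec_int8([quantize(row) for row in W], quantize(x))
exact = [sum(m * v for m, v in zip(row, x)) for row in W]
# approx stays close to exact despite using only 8 bits per value
```

Hardware that does only this kind of low-precision arithmetic can be made much denser and more power-efficient than a general-purpose processor.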

This is the first one we built, TPU v1.

“TPU” stands for Tensor Processing Unit.

These have been used for many years
behind every Google search,

for translation,

in the DeepMind AlphaGo matches,

so Lee Sedol and Ke Jie
maybe didn’t realize,

but they were competing
against racks of TPU cards.

And we’ve built a bunch
of subsequent versions of TPUs

that are even better and more exciting.

But despite all these successes,

I think we’re still doing
many things wrong,

and I’ll tell you about three
key things we’re doing wrong,

and how we’ll fix them.

The first is that most
neural networks today

are trained to do one thing,
and one thing only.

You train it for a particular task
that you might care deeply about,

but it’s a pretty heavyweight activity.

You need to curate a data set,

you need to decide
what network architecture you’ll use

for this problem,

you need to initialize the weights
with random values,

apply lots of computation
to make adjustments to the weights.

And at the end, if you’re lucky,
you end up with a model

that is really good
at that task you care about.

But if you do this over and over,

you end up with thousands
of separate models,

each perhaps very capable,

but separate for all the different
tasks you care about.

But think about how people learn.

In the last year, many of us
have picked up a bunch of new skills.

I’ve been honing my gardening skills,

experimenting with vertical
hydroponic gardening.

To do that, I didn’t need to relearn
everything I already knew about plants.

I was able to know
how to put a plant in a hole,

how to pour water, that plants need sun,

and leverage that
in learning this new skill.

Computers can work
the same way, but they don’t today.

If you train a neural
network from scratch,

it’s effectively like forgetting
your entire education

every time you try to do something new.

That’s crazy, right?

So instead, I think we can
and should be training

multitask models that can do
thousands or millions of different tasks.

Each part of that model would specialize
in different kinds of things.

And then, if we have a model
that can do a thousand things,

and the thousand and first
thing comes along,

we can leverage
the expertise we already have

in the related kinds of things

so that we can more quickly be able
to do this new task,

just like you, if you’re confronted
with some new problem,

you quickly identify
the 17 things you already know

that are going to be helpful
in solving that problem.
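The idea of leveraging existing expertise for a new task can be sketched as a shared "trunk" of learned features reused across many small per-task heads. Everything here is hypothetical and deliberately tiny; the point is only that the thousand-and-first task adds a new head rather than retraining from scratch:

```python
# Shared features, learned once and reused by every task (toy stand-in).
def shared_trunk(x):
    return [x[0] + x[1], x[0] * x[1]]

class TaskHead:
    """A small task-specific layer on top of the shared trunk."""
    def __init__(self, weights):
        self.weights = weights
    def __call__(self, features):
        return sum(w * f for w, f in zip(self.weights, features))

heads = {
    "task_sum":  TaskHead([1.0, 0.0]),   # existing task
    "task_prod": TaskHead([0.0, 1.0]),   # existing task
}
# A new task only needs a new head; the trunk's expertise is reused.
heads["task_new"] = TaskHead([0.5, 0.5])

out = heads["task_new"](shared_trunk([2.0, 3.0]))
```

This mirrors the gardening example: the plant knowledge (the trunk) is kept, and only the hydroponics-specific part (the head) is new.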

Second problem is that most
of our models today

deal with only a single
modality of data –

with images, or text or speech,

but not all of these all at once.

But think about how you
go about the world.

You’re continuously using all your senses

to learn from, react to,

figure out what actions
you want to take in the world.

Makes a lot more sense to do that,

and we can build models in the same way.

We can build models that take in
these different modalities of input data,

text, images, speech,

but then fuse them together,

so that regardless of whether the model
sees the word “leopard,”

sees a video of a leopard
or hears someone say the word “leopard,”

the same response
is triggered inside the model:

the concept of a leopard.

Models like this can also deal
with different kinds of input data,

even nonhuman inputs,
like genetic sequences,

3D clouds of points,
as well as images, text and video.
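The fusion idea is that every modality's encoder maps into one shared concept space, so the word "leopard," a leopard image, and spoken "leopard" land near the same point. A toy sketch with hypothetical lookup-table encoders standing in for trained networks:

```python
# Shared concept space (toy 2-D embeddings; a real model learns these).
CONCEPTS = {"leopard": [1.0, 0.0], "truck": [0.0, 1.0]}

def encode_text(word):        # stand-in for a text encoder
    return CONCEPTS[word]

def encode_image(label):      # stand-in for a vision encoder
    return CONCEPTS[label]

def encode_speech(transcript):  # stand-in for a speech encoder
    return CONCEPTS[transcript]

def same_concept(a, b, threshold=0.9):
    """Two embeddings trigger the same response if they are close enough."""
    return sum(x * y for x, y in zip(a, b)) > threshold

# All three modalities map to the same internal concept:
text_vec = encode_text("leopard")
image_vec = encode_image("leopard")
speech_vec = encode_speech("leopard")
```

Because the encoders share an output space, adding a new modality (say, 3D point clouds) only requires a new encoder into the same space.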

The third problem
is that today’s models are dense.

There’s a single model,

the model is fully activated
for every task,

for every example
that we want to accomplish,

whether that’s a really simple
or a really complicated thing.

This, too, is unlike
how our own brains work.

Different parts of our brains
are good at different things,

and we’re continuously calling
upon the pieces of them

that are relevant for the task at hand.

For example, nervously watching
a garbage truck

back up towards your car,

the part of your brain that thinks
about Shakespearean sonnets

is probably inactive.

(Laughter)

AI models can work the same way.

Instead of a dense model,

we can have one
that is sparsely activated.

So for particular different tasks,
we call upon different parts of the model.

During training, the model can also learn
which parts are good at which things,

to continuously identify what parts
it wants to call upon

in order to accomplish a new task.

The advantage of this is we can have
a very high-capacity model,

but it’s very efficient,

because we’re only calling
upon the parts that we need

for any given task.
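Sparse activation can be sketched as a router that calls upon only the experts relevant to each input, leaving the rest of the model inactive (a hypothetical mixture-of-experts-style toy; the string-matching router stands in for a learned one):

```python
# Three "parts of the model," each good at different things.
def expert_math(x):    return x * 2
def expert_poetry(x):  return -x
def expert_vision(x):  return x + 10

EXPERTS = {"math": expert_math, "poetry": expert_poetry, "vision": expert_vision}

def route(task):
    """Stand-in for a learned router: pick which parts of the model to call."""
    return ["math"] if "number" in task else ["vision"]

def sparse_model(task, x):
    active = route(task)                       # only the relevant pieces run
    return sum(EXPERTS[name](x) for name in active)

result = sparse_model("add these numbers", 3)  # only expert_math is activated
```

The model's total capacity is the sum of all experts, but the cost per example is only that of the few experts the router selects.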

So fixing these three things, I think,

will lead to a more powerful AI system:

instead of thousands of separate models,

train a handful of general-purpose models

that can do thousands
or millions of things.

Instead of dealing with single modalities,

deal with all modalities,

and be able to fuse them together.

And instead of dense models,
use sparse, high-capacity models,

where we call upon the relevant
bits as we need them.

We’ve been building a system
that enables these kinds of approaches,

and we’ve been calling
the system “Pathways.”

So the idea is this model
will be able to do

thousands or millions of different tasks,

and then, we can incrementally
add new tasks,

and it can deal
with all modalities at once,

and then incrementally learn
new tasks as needed

and call upon the relevant
bits of the model

for different examples or tasks.

And we’re pretty excited about this,

we think this is going
to be a step forward

in how we build AI systems.

But I also wanted
to touch on responsible AI.

We clearly need to make sure
that this vision of powerful AI systems

benefits everyone.

These kinds of models raise
important new questions

about how do we build them with fairness,

interpretability, privacy and security,

for all users in mind.

For example, if we’re going
to train these models

on thousands or millions of tasks,

we’ll need to be able to train them
on large amounts of data.

And we need to make sure that data
is thoughtfully collected

and is representative of different
communities and situations

all around the world.

And data concerns are only
one aspect of responsible AI.

We have a lot of work to do here.

So in 2018, Google published
this set of AI principles

by which we think about developing
these kinds of technology.

And these have helped guide us
in how we do research in this space,

how we use AI in our products.

And I think it’s a really helpful
and important framing

for how to think about these deep
and complex questions

about how we should
be using AI in society.

We continue to update these
as we learn more.

Many of these kinds of principles
are active areas of research –

super important area.

Moving from single-purpose systems
that kind of recognize patterns in data

to these kinds of general-purpose
intelligent systems

that have a deeper
understanding of the world

will really enable us to tackle

some of the greatest problems
humanity faces.

For example,

we’ll be able to diagnose more disease;

we’ll be able to engineer better medicines

by infusing these models
with knowledge of chemistry and physics;

we’ll be able to advance
educational systems

by providing more individualized tutoring

to help people learn
in new and better ways;

we’ll be able to tackle
really complicated issues,

like climate change,

and perhaps engineering
of clean energy solutions.

So really, all of these kinds of systems

are going to be requiring
the multidisciplinary expertise

of people all over the world.

So connecting AI
with whatever field you are in,

in order to make progress.

So I’ve seen a lot
of advances in computing,

and how computing, over the past decades,

has really helped millions of people
better understand the world around them.

And AI today has the potential
to help billions of people.

We truly live in exciting times.

Thank you.

(Applause)

Q&A with Chris Anderson

Chris Anderson: Thank you so much.

I want to follow up on a couple things.

This is what I heard.

Most people’s traditional picture of AI

is that computers recognize
a pattern of information,

and with a bit of machine learning,

they can get really good at that,
better than humans.

What you’re saying is those patterns

are no longer the atoms
that AI is working with,

that it’s much richer-layered concepts

that can include all manners
of types of things

that go to make up a leopard, for example.

So what could that lead to?

Give me an example
of when that AI is working,

what do you picture happening in the world

in the next five or 10 years
that excites you?

Jeff Dean: I think
the grand challenge in AI

is how do you generalize
from a set of tasks

you already know how to do

to new tasks,

as easily and effortlessly as possible.

And the current approach of training
separate models for everything

means you need lots of data
about that particular problem,

because you’re effectively trying
to learn everything

about the world
and that problem, from nothing.

But if you can build these systems

that already are infused with how to do
thousands and millions of tasks,

then you can effectively
teach them to do a new thing

with relatively few examples.

So I think that’s the real hope,

that you could then have a system
where you just give it five examples

of something you care about,

and it learns to do that new task.

CA: You can do a form
of self-supervised learning

that is based on remarkably
little seeding.

JD: Yeah, as opposed to needing
10,000 or 100,000 examples

to figure everything in the world out.

CA: Aren’t there kind of terrifying
unintended consequences

possible, from that?

JD: I think it depends
on how you apply these systems.

It’s very clear that AI
can be a powerful system for good,

or if you apply it in ways
that are not so great,

it can be a negative consequence.

So I think that’s why it’s important
to have a set of principles

by which you look at potential uses of AI

and really are careful and thoughtful
about how you consider applications.

CA: One of the things
people worry most about

is that, if AI is so good at learning
from the world as it is,

it’s going to carry forward
into the future

aspects of the world as it is
that actually aren’t right, right now.

And there’s obviously been
a huge controversy about that

recently at Google.

Some of those principles
of AI development,

you’ve been challenged that you’re not
actually holding to them.

Not really interested to hear
about comments on a specific case,

but … are you really committed?

How do we know that you are
committed to these principles?

Is that just PR, or is that real,
at the heart of your day-to-day?

JD: No, that is absolutely real.

Like, we have literally hundreds of people

working on many of these
related research issues,

because many of those
things are research topics

in their own right.

How do you take data from the real world,

that is the world as it is,
not as we would like it to be,

and how do you then use that
to train a machine-learning model

and adapt the data bit of the scene

or augment the data with additional data

so that it can better reflect
the values we want the system to have,

not the values that it sees in the world?

CA: But you work for Google,

Google is funding the research.

How do we know that the main values
that this AI will build

are for the world,

and not, for example, to maximize
the profitability of an ad model?

When you know everything
there is to know about human attention,

you’re going to know so much

about the little wriggly,
weird, dark parts of us.

In your group, are there rules
about how you hold off,

church-state wall
between a sort of commercial push,

“You must do it for this purpose,”

so that you can inspire
your engineers and so forth,

to do this for the world, for all of us.

JD: Yeah, our research group
does collaborate

with a number of groups across Google,

including the Ads group,
the Search group, the Maps group,

so we do have some collaboration,
but also a lot of basic research

that we publish openly.

We’ve published more
than 1,000 papers last year

in different topics,
including the ones you discussed,

about fairness, interpretability
of the machine-learning models,

things that are super important,

and we need to advance
the state of the art in this

in order to continue to make progress

to make sure these models
are developed safely and responsibly.

CA: It feels like we’re at a time
when people are concerned

about the power of the big tech companies,

and it’s almost, if there was ever
a moment to really show the world

that this is being done
to make a better future,

that is actually key to Google’s future,

as well as all of ours.

JD: Indeed.

CA: It’s very good to hear you
come and say that, Jeff.

Thank you so much for coming here to TED.

JD: Thank you.

(Applause)

Summary

Jeff Dean’s speech focuses on the transformative potential of AI, highlighting the significant progress made in the last decade. He discusses the capabilities of AI in various domains, such as computer vision, language understanding, and medical diagnosis. Dean attributes this progress to advancements in neural networks and computational power. However, he acknowledges several shortcomings in current AI systems, including their single-task nature, limited modalities, and dense architecture.

Dean proposes solutions to address these shortcomings, advocating for the development of multitask models capable of handling diverse tasks and modalities. He emphasizes the importance of sparse, high-capacity models that can efficiently activate relevant components for specific tasks. Dean introduces the concept of “Pathways,” a system designed to enable these approaches, which he believes will represent a significant step forward in AI research and development.

In addition to discussing technical advancements, Dean emphasizes the importance of responsible AI, highlighting the need for fairness, interpretability, privacy, and security in AI systems. He outlines Google’s AI principles as a guiding framework for responsible AI development and acknowledges the ongoing research and challenges in this area. Dean concludes his speech by expressing optimism about the potential of AI to address complex societal challenges and the importance of multidisciplinary collaboration in realizing this potential.

Afterword

I finished studying this talk at 18:23 on April 29, 2024.
