Learning English from TED Talks: Your right to repair AI systems by Rumman Chowdhury

Your right to repair AI systems

Link: https://www.ted.com/talks/rumman_chowdhury_your_right_to_repair_ai_systems

Speaker: Rumman Chowdhury

Date: April 2024

Contents

  • Your right to repair AI systems
    • Introduction
    • Vocabulary
    • Summary
    • Transcript
    • Afterword

Introduction

For AI to achieve its full potential, non-experts need to contribute to its development, says Rumman Chowdhury, CEO and cofounder of Humane Intelligence. She shares how the right-to-repair movement of consumer electronics provides a promising model for a path forward, with ways for everyone to report issues, patch updates or even retrain AI technologies.

Vocabulary

crop yields: the quantity of crops harvested

farmers would have to wait for weeks while their crops rot and pests took over.

ride-share drivers: drivers working for ride-hailing services such as Uber or Lyft

measles: US /ˈmiːzlz/ an infectious viral disease marked by fever and a red rash

mumps: US /mʌmps/ a viral disease causing painful swelling of the salivary glands

other diseases like measles, mumps and the flu

resoundingly: emphatically; decisively

specs: specifications; the detailed technical requirements of a design

could they imagine a modern AI system that would be able to design the specs of a modern art museum? The answer, resoundingly, was no.

Now architects are liable if something goes wrong with their buildings. They could lose their license, they could be fined, they could even go to prison.

evacuation: US /ɪˌvækjuˈeɪʃ(ə)n/ the organized removal of people from a dangerous place

exit doors that open the wrong way, leading to people being crushed in an evacuation crisis

shatter: to break suddenly into many pieces; to smash

the wind blows too hard and shatters windows.

agentic AI: AI systems that can act autonomously on a user's behalf

tipping point: the critical point at which a small change triggers a larger, often irreversible one

The next wave of artificial intelligence systems, called agentic AI, is a true tipping point between whether or not we retain human agency, or whether or not AI systems make our decisions for us.

medication: medicine; a prescribed drug

a medical agent might determine whether or not your family needs doctor's appointments, it might refill prescription medications, or in case of an emergency, send medical records to the hospital.

What professional would trust an AI system with job decisions, unless you could retrain it the way you might a junior employee?

This sentence omits a verb after "might." Here is the explanation:

In "the way you might a junior employee," a verb is indeed left out. This is a common English construction known as ellipsis. The full version would be:

"the way you might train a junior employee."

The verb "train" is dropped to avoid repetition, since the context already makes its meaning clear. The ellipsis keeps the sentence concise without hurting comprehension.

Another example:

"If you treat the project the way you would a major client, it will succeed."

In full, the sentence would be:
"If you treat the project the way you would treat a major client, it will succeed."

Again, the second "treat" is omitted because the context makes the verb's meaning clear.

intrepid: US /ɪnˈtrɛpɪd/ fearless; brave and adventurous

Or you could be like these intrepid farmers and learn to program and fine-tune your own systems

Summary

Rumman Chowdhury, CEO and cofounder of Humane Intelligence, begins her talk by highlighting the intersection of artificial intelligence (AI) and farming technology. She discusses advancements such as computer vision predicting crop yields and AI identifying pests. However, she notes the challenges faced by farmers, exemplified by the controversy over John Deere's smart tractors, which restricted farmers' ability to repair their own equipment. This led to a movement called "right to repair," advocating for the ability to repair one's own technology, whether it's tractors or household devices. Chowdhury emphasizes that this right should extend to AI systems to ensure that people can fix and trust the technologies they use.

Chowdhury then addresses the declining public confidence in AI, citing polls that show widespread concern about the technology’s impact. She explains that people feel alienated because their data is used without consent to create systems that affect their lives, and they lack a voice in how these systems are built. To bridge this gap, she proposes the concept of red teaming, a practice from cybersecurity where external experts test and find flaws in systems. She highlights successful examples of red-teaming exercises with scientists and architects, which led to improvements in AI models and demonstrated the need for AI systems that interact with and are trusted by users.

In her concluding remarks, Chowdhury emphasizes the importance of involving people in the AI development process to build trust and ensure the technology benefits everyone. She introduces the idea of a “right to repair” for AI, suggesting tools like diagnostics boards and collaborations with ethical hackers to allow users to understand and improve AI systems. Chowdhury stresses that the potential of AI can only be realized if developers and users work together. She calls for a shift in focus from merely building trustworthy AI to creating tools that empower people to make AI work for them, asserting that technologists alone cannot achieve this goal without public involvement.


Transcript

I want to tell you a story

about artificial intelligence and farmers.

Now, what a strange combination, right?

Two topics could not sound
more different from each other.

But did you know that modern farming
actually involves a lot of technology?

So computer vision is used
to predict crop yields.

And artificial intelligence
is used to find,

identify and get rid of insects.

Predictive analytics helps figure out
extreme weather conditions

like drought or hurricanes.

But this technology
is also alienating to farmers.

And this all came to a head in 2017

with the tractor company John Deere
when they introduced smart tractors.

So before then,
if a farmer’s tractor broke,

they could just repair it themselves
or take it to a mechanic.

Well, the company actually made it illegal

for farmers to fix their own equipment.

You had to use a licensed technician

and farmers would have to wait for weeks

while their crops rot and pests took over.

So they took matters into their own hands.

Some of them learned to program,

and they worked with hackers to create
patches to repair their own systems.

In 2022,

at one of the largest hacker
conferences in the world, DEFCON,

a hacker named Sick Codes and his team

showed everybody how to break
into a John Deere tractor,

showing that, first of all,
the technology was vulnerable,

but also that you can and should
own your own equipment.

To be clear, this is illegal,

but there are people
trying to change that.

Now that movement is called
the “right to repair.”

The right to repair
goes something like this.

If you own a piece of technology,

it could be a tractor, a smart toothbrush,

a washing machine,

you should have the right
to repair it if it breaks.

So why am I telling you this story?

The right to repair needs to extend
to artificial intelligence.

Now it seems like every week

there is a new and mind-blowing
innovation in AI.

But did you know that public confidence
is actually declining?

A recent Pew poll showed
that more Americans are concerned

than they are excited
about the technology.

This is echoed throughout the world.

The World Risk Poll shows

that respondents from Central
and South America and Africa

all said that they felt AI would lead
to more harm than good for their people.

As a social scientist and an AI developer,

this frustrates me.

I’m a tech optimist

because I truly believe
this technology can lead to good.

So what’s the disconnect?

Well, I’ve talked to hundreds
of people over the last few years.

Architects and scientists,
journalists and photographers,

ride-share drivers and doctors,

and they all say the same thing.

People feel like an afterthought.

They all know that their data is harvested
often without their permission

to create these sophisticated systems.

They know that these systems
are determining their life opportunities.

They also know that nobody
ever bothered to ask them

how the system should be built,

and they certainly have no idea
where to go if something goes wrong.

We may not own AI systems,

but they are slowly dominating our lives.

We need a better feedback loop

between the people
who are making these systems,

and the people who are best
determined to tell us

how these AI systems
should interact in their world.

One step towards this
is a process called red teaming.

Now, red teaming is a practice
that was started in the military,

and it’s used in cybersecurity.

In a traditional red-teaming exercise,

external experts are brought in
to break into a system,

sort of like what Sick Codes did
with tractors, but legal.

So red teaming acts as a way
of testing your defenses

and when you can figure out
where something will go wrong,

you can figure out how to fix it.

But when AI systems go rogue,

it’s more than just a hacker breaking in.

The model could malfunction
or misrepresent reality.

So, for example, not too long ago,

we saw an AI system attempting diversity

by showing historically inaccurate photos.

Anybody with a basic
understanding of Western history

could have told you
that neither the Founding Fathers

nor Nazi-era soldiers
would have been Black.

In that case, who qualifies as an expert?

You.

I’m working with thousands of people
all around the world

on large and small red-teaming exercises,

and through them we found
and fixed mistakes in AI models.

We also work with some of the biggest
tech companies in the world:

OpenAI, Meta, Anthropic, Google.

And through this, we’ve made models
work better for more people.

Here’s a bit of what we’ve learned.

We partnered with the Royal Society
in London to do a scientific,

mis- and disinformation event
with disease scientists.

What these scientists found

is that AI models actually had
a lot of protections

against COVID misinformation.

But for other diseases like measles,
mumps and the flu,

the same protections didn’t apply.

We reported these changes,

they’re fixed and now
we are all better protected

against scientific mis-
and disinformation.

We did a really similar exercise
with architects at Autodesk University,

and we asked them a simple question:

Will AI put them out of a job?

Or more specifically,

could they imagine a modern AI system

that would be able to design the specs
of a modern art museum?

The answer, resoundingly, was no.

Here’s why, architects do more
than just draw buildings.

They have to understand physics
and material science.

They have to know building codes,

and they have to do that

while making something
that evokes emotion.

What the architects wanted
was an AI system

that interacted with them,
that would give them feedback,

maybe proactively offer
design recommendations.

And today’s AI systems,
not quite there yet.

But those are technical problems.

People building AI are incredibly smart,

and maybe they could solve
all that in a few years.

But that wasn’t their biggest concern.

Their biggest concern was trust.

Now architects are liable if something
goes wrong with their buildings.

They could lose their license,

they could be fined,
they could even go to prison.

And failures can happen
in a million different ways.

For example, exit doors
that open the wrong way,

leading to people being crushed
in an evacuation crisis,

or broken glass raining down
onto pedestrians in the street

because the wind blows too hard
and shatters windows.

So why would an architect trust
an AI system with their job,

with their literal freedom,

if they couldn’t go in
and fix a mistake if they found it?

So we need to figure out these problems
today, and I’ll tell you why.

The next wave of artificial intelligence
systems, called agentic AI,

is a true tipping point

between whether or not
we retain human agency,

or whether or not AI systems
make our decisions for us.

Imagine an AI agent as kind of
like a personal assistant.

So, for example,
a medical agent might determine

whether or not your family needs
doctor’s appointments,

it might refill prescription medications,
or in case of an emergency,

send medical records to the hospital.

But AI agents can’t and won’t exist

unless we have a true right to repair.

What parent would trust
their child’s health to an AI system

unless you could run
some basic diagnostics?

What professional would trust
an AI system with job decisions,

unless you could retrain it
the way you might a junior employee?

Now, a right to repair
might look something like this.

You could have a diagnostics board

where you run basic tests that you design,

and if something’s wrong,
you could report it to the company

and hear back when it’s fixed.

Or you could work with third parties
like ethical hackers

who make patches for systems
like we do today.

You can download them and use them
to improve your system

the way you want it to be improved.

Or you could be like these intrepid
farmers and learn to program

and fine-tune your own systems.

We won’t achieve the promised benefits
of artificial intelligence

unless we figure out how to bring people
into the development process.

I’ve dedicated my career
to responsible AI,

and in that field we ask the question,

what can companies build
to ensure that people trust AI?

Now, through these red-teaming exercises,
and by talking to you,

I’ve come to realize that we’ve been
asking the wrong question all along.

What we should have been asking
is what tools can we build

so people can make AI beneficial for them?

Technologists can’t do it alone.

We can only do it with you.

Thank you.

(Applause)

Afterword

Shanghai, June 6, 2024, 18:01.

