Benedict Evans: Ways to think about AGI


Benedict Evans, 4 May 2024


How do we think about a fundamentally unknown and unknowable risk, when the experts agree only that they have no idea?

The manuscript for ‘A Logic Named Joe’

In 1946, my grandfather, writing as ‘Murray Leinster’, published a science fiction story called ‘A Logic Named Joe’. Everyone has a computer (a ‘logic’) connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, ‘Joe’, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues - ‘Check your censorship circuits!’ - until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we’ve thought about computers, we’ve wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of ‘artificial intelligence’, and wondered what that would mean, and indeed, what we’re trying to say with the word ‘intelligence’. There’s an old joke that ‘AI’ is whatever doesn’t work yet, because once it works, people say ‘that’s not AI - it’s just software’. Calculators do super-human maths, and databases have super-human memory, but they can’t do anything else, and they don’t understand what they’re doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are ‘super-human’ but they’re just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees and octopuses and many other creatures. AI researchers have come to talk about this as ‘general intelligence’ and hence making it would be ‘artificial general intelligence’ - AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.

Every few decades since 1946, there’s been a wave of excitement that something like this might be close, each time followed by disappointment and an ‘AI Winter’, as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that in “from three to eight years we will have a machine with the general intelligence of an average human being”, but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn’t work).

As we all know, the Large Language Models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called ‘doomers’ argue there is a real risk of AGI emerging spontaneously from current research, that this could be a threat to humanity, and call for urgent government action. Some of this comes from self-interested companies seeking barriers to competition (‘This is very dangerous and we are building it as fast as possible, but don’t let anyone else do it’), but plenty of it is sincere.

(I should point out, incidentally, that the doomers’ ‘existential risk’ concern that an AGI might want to and be able to destroy or control humanity, or treat us as pets, is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)

However, for every expert that thinks that AGI might now be close, there’s another who doesn’t. There are some who think LLMs might scale all the way to AGI, and others who think, again, that we still need an unknown number of unknown further breakthroughs.

More importantly, they would all agree that they don’t actually know. This is why I used terms like ‘might’ or ‘may’ - our first stop is an appeal to authority (often considered a logical fallacy, for what that’s worth), but the authorities tell us that they don’t know, and don’t agree.

They don’t know, either way, because we don’t have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don’t know why LLMs seem to work so well, and we don’t know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don’t know why they work. We have many theories for parts of these, but we don’t know the system. Absent an appeal to religion, we don’t know of any reason why AGI cannot be created (it doesn’t appear to violate any law of physics), but we don’t know how to create it or what it is, except as a concept.

And so, some experts look at the dramatic progress of LLMs and say ‘perhaps!’ and others say ‘perhaps, but probably not!’, and this is fundamentally an intuitive and instinctive assessment, not a scientific one.

Indeed, ‘AGI’ itself is a thought experiment, or, one could suggest, a place-holder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty or inevitability.

If we start by defining AGI as something that is in effect a new life form, equal to people in ‘every’ way (barring some sense of physical form), even down to concepts like ‘awareness’, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you’ve just begged the question.

As Anselm demonstrated, if you define God as something that exists, then you’ve proved that God exists, but you won’t persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm’s proof was invalid) but you cannot create knowledge like that.

Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn’t of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god, or than bundling enough sub-prime mortgages together could produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say ‘people were wrong about X in the past so they must be wrong about Y now’, and the fact that leading AI scientists were wrong before absolutely does not tell us they’re wrong now, but it does tell us to hesitate. They can all be wrong at the same time.

Meanwhile, how do you know that’s what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there’s no a priori reason why it must be interesting. ‘God’ might be real, and boring, and not care about us, and we don’t know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence ‘just’ about speed?). We might produce general intelligence that’s hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don’t know. 

Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about ‘general intelligence’ as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the ‘general intelligence’ of Llama 6 or ChatGPT 7 and say “That’s not AGI, it’s just software!” We created the term AGI because AI came just to mean software, and perhaps ‘AGI’ will be the same, and we’ll need to invent another term.

This fundamental uncertainty, even at the level of what we’re talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission then you know what to expect, and you know what to do. But this isn’t fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don’t know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it’s been a very good thing that we should want much more of.

Hence, I’ve already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn’t explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel… will it get there? We have no equivalents here. We don’t know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!
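To make the contrast concrete, here is a minimal sketch of the kind of back-of-the-envelope check Newton could, in principle, have done, using the Tsiolkovsky rocket equation. The stage figures and the mission delta-v budget below are rough, illustrative placeholders, not the Saturn V’s real specification; the point is only that ‘will it get there?’ reduces to known physics and arithmetic.

```python
import math

# A rough, illustrative version of "this much weight, this much thrust, this much
# fuel - will it get there?" using the Tsiolkovsky rocket equation:
#   delta_v = Isp * g0 * ln(mass_full / mass_empty)
# All numbers below are placeholder round figures for a notional three-stage
# vehicle, not real Saturn V data.

G0 = 9.81  # standard gravity, m/s^2


def stage_delta_v(isp_s: float, mass_full_kg: float, mass_empty_kg: float) -> float:
    """Ideal velocity change one stage can deliver (ignores gravity and drag losses)."""
    return isp_s * G0 * math.log(mass_full_kg / mass_empty_kg)


# (Isp in seconds, stage-ignition mass in kg, stage-burnout mass in kg) - illustrative only.
stages = [
    (300, 3_000_000, 1_000_000),
    (420, 800_000, 250_000),
    (420, 220_000, 70_000),
]

total_dv = sum(stage_delta_v(isp, full, empty) for isp, full, empty in stages)

# Very rough mission budget: ~9.4 km/s to low Earth orbit (including losses)
# plus ~3.2 km/s for trans-lunar injection.
required_dv = 9_400 + 3_200

print(f"vehicle delta-v ~ {total_dv / 1000:.1f} km/s, mission needs ~ {required_dv / 1000:.1f} km/s")
print("will it get there?", total_dv >= required_dv)
```

Nothing in that sketch is specific to Apollo: for any rocket and any destination, the same two lines of algebra answer the question, and that is exactly the calculation we cannot write down for LLMs and AGI.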

On this theme, some people suggest that we are in the empirical stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there’s an old English joke about a Frenchman who says ‘that’s all very well in practice, but does it work in theory’). Yet while we can, empirically, see the rocket going up, we don’t know how far away the moon is. We can’t plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth. 

All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo program went wrong the downside was not, even theoretically, the end of humanity. A little before my grandfather, here’s another magazine writer on unknown risks:


What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal’s Wager! Anselm’s Proof!), but if you can’t know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they’re real, we know they could destroy mankind, and they have no benefits at all (unless they’re very very small). And yet, we’re not really looking for them.

Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can’t meet demand), but on a decade’s view the models will get more efficient and the chips will be everywhere. In the end, you can’t ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine if we discovered a way that anyone could build a bomb in their garage with household materials - good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI and it turns them into paperclips.)

By default, though, this will follow all the other waves of AI, and become ‘just’ more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK’s Post Office scandal reminds us that you don’t need AGI for software to ruin people’s lives. LLMs will produce more pain and more scandals, but life will go on. At least, that’s the answer I prefer myself.
