Chrome's autoplay restriction policy (original article)

Client: Why won't it autoplay even though I clearly set it to autoplay?
Me: #@$%^#~
Client: Other browsers can autoplay, so why can't Chrome? There must be something wrong with your product...
Me: ^$&*^(*&^(*%

Alright, I finally found the root cause, so I'm sharing it here:
----

Excerpt 1: The rationale behind Chrome's autoplay restrictions

As you may have noticed, web browsers are moving toward stricter autoplay policies in order to improve the user experience, minimize incentives to install ad blockers, and reduce costly data consumption. These changes are intended to give users greater control over playback and to benefit video publishers with legitimate use cases.

Excerpt 2: Chrome's autoplay policy

  • Muted autoplay is always allowed.
  • Autoplay with sound is allowed in the following cases:
    • The user has interacted with the domain (click, tap, etc.).
    • On desktop, the user's Media Engagement Index (MEI) threshold has been crossed, meaning the user has previously played video with sound.
    • On mobile, the user has added the site to their home screen.
    • The top frame can delegate autoplay permission to its iframes to allow autoplay with sound.

Autoplay policy in Chrome

Improved user experience, minimized incentives to install ad blockers, and reduced data consumption

Published on Wednesday, September 13, 2017 • Updated on Tuesday, May 25, 2021

François Beaufort

The Autoplay Policy launched in Chrome 66 for audio and video elements and is effectively blocking roughly half of unwanted media autoplays in Chrome. For the Web Audio API, the autoplay policy launched in Chrome 71. This affects web games, some WebRTC applications, and other web pages using audio features. More details can be found in the Web Audio API section below.

Chrome's autoplay policies changed in April of 2018 and I'm here to tell you why and how this affects video playback with sound. Spoiler alert: users are going to love it!

Internet memes tagged "autoplay" found on Imgflip and Imgur.

#New behaviors

As you may have noticed, web browsers are moving towards stricter autoplay policies in order to improve the user experience, minimize incentives to install ad blockers, and reduce data consumption on expensive and/or constrained networks. These changes are intended to give greater control of playback to users and to benefit publishers with legitimate use cases.

Chrome's autoplay policies are simple:

  • Muted autoplay is always allowed.
  • Autoplay with sound is allowed if:
    • The user has interacted with the domain (click, tap, etc.).
    • On desktop, the user's Media Engagement Index threshold has been crossed, meaning the user has previously played video with sound.
    • The user has added the site to their home screen on mobile or installed the PWA on desktop.
  • Top frames can delegate autoplay permission to their iframes to allow autoplay with sound.

#Media Engagement Index

The Media Engagement Index (MEI) measures an individual's propensity to consume media on a site. Chrome's MEI is the ratio of visits to significant media playback events per origin:

  • Consumption of the media (audio/video) must be greater than seven seconds.
  • Audio must be present and unmuted.
  • The tab with the video is active.
  • Size of the video (in px) must be greater than 200x140.

From that, Chrome calculates a media engagement score, which is highest on sites where media is played on a regular basis. When it is high enough, media is allowed to autoplay on desktop only.
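Purely to illustrate those criteria (this is not an API Chrome exposes, and the field names below are hypothetical), a playback event counts as "significant" roughly when all of the following hold:

// Illustrative sketch only: Chrome computes this internally; there is no public API for it.
function isSignificantPlayback(playback) {
  return (
    playback.durationSeconds > 7 &&    // media consumed for more than seven seconds
    playback.hasAudioTrack &&
    !playback.muted &&                 // audio present and unmuted
    playback.tabIsActive &&            // the tab with the video is active
    playback.widthPx > 200 &&
    playback.heightPx > 140            // video larger than 200x140 px
  );
}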

A user's MEI is available at the about://media-engagement internal page.

Screenshot of the about://media-engagement internal page in Chrome.

#Developer switches

As a developer, you may want to change Chrome autoplay policy behavior locally to test your website for different levels of user engagement.

  • You can disable the autoplay policy entirely by using a command line flag: chrome.exe --autoplay-policy=no-user-gesture-required. This lets you test your website as if the user were strongly engaged with your site and autoplay were always allowed.

  • You can also make sure autoplay is never allowed by disabling the use of MEI, including the preloaded MEI data that gives sites with the highest overall MEI autoplay by default for new users. Do this with the flags: chrome.exe --disable-features=PreloadMediaEngagementData,MediaEngagementBypassAutoplayPolicies.

#Iframe delegation

A permissions policy allows developers to selectively enable and disable browser features and APIs. Once an origin has received autoplay permission, it can delegate that permission to cross-origin iframes with the permissions policy for autoplay. Note that autoplay is allowed by default on same-origin iframes.

<!-- Autoplay is allowed. -->
<iframe src="https://cross-origin.com/myvideo.html" allow="autoplay"></iframe>

<!-- Autoplay and Fullscreen are allowed. -->
<iframe src="https://cross-origin.com/myvideo.html" allow="autoplay; fullscreen"></iframe>

When the permissions policy for autoplay is disabled, calls to play() without a user gesture will reject the promise with a NotAllowedError DOMException. And the autoplay attribute will also be ignored.
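For instance, the page inside the iframe can detect that rejection by checking the error name (a minimal sketch; the selector is just illustrative):

const video = document.querySelector('video');

video.play().catch((error) => {
  if (error.name === 'NotAllowedError') {
    // Autoplay was blocked, e.g. the embedding frame did not set allow="autoplay".
    // Fall back to showing controls so the user can start playback manually.
    video.controls = true;
  }
});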

Warning

Older articles incorrectly recommend using the attribute gesture=media which is not supported.

#Examples

Example 1: Every time a user visits VideoSubscriptionSite.com on their laptop they watch a TV show or a movie. As their media engagement score is high, autoplay is allowed.

Example 2: GlobalNewsSite.com has both text and video content. Most users go to the site for text content and watch videos only occasionally. Users' media engagement score is low, so autoplay wouldn't be allowed if a user navigates directly from a social media page or search.

Example 3: LocalNewsSite.com has both text and video content. Most people enter the site through the homepage and then click on the news articles. Autoplay on the news article pages would be allowed because of user interaction with the domain. However, care should be taken to make sure users aren't surprised by autoplaying content.

Example 4: MyMovieReviewBlog.com embeds an iframe with a movie trailer to go with a review. Users interacted with the domain to get to the blog, so autoplay is allowed. However, the blog needs to explicitly delegate that privilege to the iframe in order for the content to autoplay.

#Chrome enterprise policies

It is possible to change the autoplay behavior with Chrome enterprise policies for use cases such as kiosks or unattended systems. Check out the Policy List help page to learn how to set the autoplay related enterprise policies:

  • The AutoplayAllowed policy controls whether autoplay is allowed or not.
  • The AutoplayAllowlist policy allows you to specify an allowlist of URL patterns where autoplay will always be enabled.
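As an illustration only (the file location and exact format depend on the platform and on your management tooling), a managed policy file for Chrome on Linux, e.g. under /etc/opt/chrome/policies/managed/, might look like this:

{
  "AutoplayAllowed": false,
  "AutoplayAllowlist": ["https://intranet.example.com", "[*.]example.org"]
}

With a configuration like that, autoplay would be blocked everywhere except on the allowlisted URL patterns.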

#Best practices for web developers

#Audio/Video elements

Here's the one thing to remember: Don't ever assume a video will play, and don't show a pause button when the video is not actually playing. It is so important that I'm going to write it one more time below for those who simply skim through this post.

Important

Don't assume a video will play, and don't show a pause button when the video is not actually playing.

You should always look at the Promise returned by the play function to see if it was rejected:

var promise = document.querySelector('video').play();

if (promise !== undefined) {
  promise.then(_ => {
    // Autoplay started!
  }).catch(error => {
    // Autoplay was prevented.
    // Show a "Play" button so that user can start playback.
  });
}
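A minimal way to wire up that fallback might look like this (the play button element and its id are hypothetical, not part of the snippet above):

const video = document.querySelector('video');
const playButton = document.querySelector('#playButton'); // hypothetical fallback control

video.play().catch(() => {
  // Autoplay was prevented: show an explicit control instead of a misleading pause state.
  playButton.hidden = false;
  playButton.addEventListener('click', () => {
    video.play();
    playButton.hidden = true;
  });
});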

Caution

Don't play interstitial ads without showing any media controls as they may not autoplay and users will have no way of starting playback.

One cool way to engage users is to use muted autoplay and let them choose to unmute. (See the example below.) Some websites already do this effectively, including Facebook, Instagram, Twitter, and YouTube.

<video id="video" muted autoplay></video>
<button id="unmuteButton">Unmute</button>

<script>
  unmuteButton.addEventListener('click', function () {
    video.muted = false;
  });
</script>

Events that trigger user activation are still to be defined consistently across browsers, so I'd recommend you stick to "click" for the time being. See GitHub issue whatwg/html#3849.

#Web Audio

The Web Audio API has been covered by the autoplay policy since Chrome 71. There are a few things to know about it. First, it is good practice to wait for a user interaction before starting audio playback so that users are aware of something happening. Think of a "play" button or "on/off" switch for instance. You can also add an "unmute" button depending on the flow of the app.

If an AudioContext is created before the document receives a user gesture, it will be created in the "suspended" state, and you will need to call resume() after the user gesture.

If you create your AudioContext on page load, you'll have to call resume() at some time after the user interacted with the page (e.g., after a user clicks a button). Alternatively, the AudioContext will be resumed after a user gesture if start() is called on any attached node.

// Existing code unchanged, except the context is declared where the click handler can see it.
var context;
window.onload = function () {
  context = new AudioContext();
  // Setup all nodes
  // ...
};

// One-liner to resume playback when the user has interacted with the page.
document.querySelector('button').addEventListener('click', function () {
  context.resume().then(() => {
    console.log('Playback resumed successfully');
  });
});

You may also create the AudioContext only when the user interacts with the page.

document.querySelector('button').addEventListener('click', function() {
  var context = new AudioContext();
  // Setup all nodes
  // ...
});

To detect whether the browser requires a user interaction to play audio, check AudioContext.state after you've created it. If playing is allowed, it should immediately switch to running. Otherwise it will be suspended. If you listen to the statechange event, you can detect changes asynchronously.
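A small sketch of that check (variable names are only illustrative):

const context = new AudioContext();

if (context.state === 'suspended') {
  // The browser is blocking audio until a user gesture: show a "play" or "unmute" control.
}

context.addEventListener('statechange', () => {
  console.log('AudioContext state is now', context.state);
});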

To see an example, check out the small Pull Request that fixes Web Audio playback for these autoplay policy rules for https://airhorner.com.

You can find a summary of Chrome's autoplay feature on the Chromium site.

