Media.
This specification is an experimental breakup of the HTML specification. You can see the full list on the index page and take part in the discussion in the repository.
The video element

If the element has a controls attribute: Interactive content.

Content model:
If the element has a src attribute: zero or more track elements, then transparent, but with no media element descendants.
If the element does not have a src attribute: zero or more source elements, then zero or more track elements, then transparent, but with no media element descendants.

Content attributes:
src — Address of the resource
crossorigin — How the element handles crossorigin requests
poster — Poster frame to show prior to video playback
preload — Hints how much buffering the media resource will likely need
autoplay — Hint that the media resource can be started automatically when the page is loaded
mediagroup — Groups media elements together with an implicit MediaController
loop — Whether to loop the media resource
muted — Whether to mute the media resource by default
controls — Show user agent controls
width — Horizontal dimension
height — Vertical dimension

Allowed ARIA role attribute value: application.
Allowed ARIA state and property attributes: aria-* attributes applicable to the allowed roles.

DOM interface:

interface HTMLVideoElement : HTMLMediaElement {
  attribute unsigned long width;
  attribute unsigned long height;
  readonly attribute unsigned long videoWidth;
  readonly attribute unsigned long videoHeight;
  attribute DOMString poster;
};
A video
element is used for playing videos or movies, and audio files with
captions.
Content may be provided inside the video
element. User agents
should not show this content to the user; it is intended for older Web browsers which do
not support video
, so that legacy video plugins can be tried, or to show text to the
users of these older browsers informing them of how to access the video contents.
In particular, this content is not intended to address accessibility concerns. To
make video content accessible to the partially sighted, the blind, the hard-of-hearing, the deaf,
and those with other physical or cognitive disabilities, a variety of features are available.
Captions can be provided, either embedded in the video stream or as external files using the
track
element. Sign-language tracks can be provided, again either embedded in the
video stream or by synchronising multiple video
elements using the mediagroup
attribute or a MediaController
object. Audio descriptions can be provided, either as a separate track embedded in the video
stream, or a separate audio track in an audio
element slaved to the same controller as the video
element(s), or in text
form using a WebVTT file referenced using the track
element and
synthesized into speech by the user agent. WebVTT can also be used to provide chapter titles. For
users who would rather not use a media element at all, transcripts or other textual alternatives
can be provided by simply linking to them in the prose near the video
element. [[!WEBVTT]]
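For example, a markup sketch (the file names are placeholders, not part of this specification) combining track-provided captions with a transcript linked in nearby prose:

<video src="lecture.webm" controls>
 <track kind="captions" src="lecture.en.vtt" srclang="en" label="English">
 Video is not supported in this browser; <a href="lecture.webm">download the lecture</a>.
</video>
<p><a href="lecture-transcript.html">Read the transcript</a>.</p>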
The video
element is a media element whose media data is
ostensibly video data, possibly with associated audio data.
The src
, preload
,
autoplay
, mediagroup
, loop
, muted
, and controls
attributes are the attributes common to all media
elements.
The poster
attribute gives the address of an
image file that the user agent can show while no video data is available. The attribute, if
present, must contain a valid non-empty URL potentially surrounded by spaces.
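For example, a markup sketch (file names are placeholders) that shows a representative frame until video data is available:

<video src="movie.webm" poster="movie-frame.jpg" controls></video>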
If the specified resource is to be used, then, when the element is created or when the poster
attribute is set, changed, or removed, the user agent must
run the following steps to determine the element's poster frame (regardless of the
value of the element's show poster flag):
If there is an existing instance of this algorithm running for this video
element, abort that instance of this algorithm without changing the poster
frame.
If the poster
attribute's value is the empty string
or if the attribute is absent, then there is no poster frame; abort these
steps.
Resolve the poster
attribute's value relative to the element. If this fails,
then there is no poster frame; abort these steps.
Fetch the resulting absolute URL, from the element's node document's origin. This must delay the load event of the element's node document.
If an image is thus obtained, the poster frame is that image. Otherwise, there is no poster frame.
The image given by the poster
attribute,
the poster frame, is intended to be a representative frame of the
video (typically one of the first non-blank frames) that gives the user an idea of what the video
is like.
A video
element represents what is given for the first matching condition in the
list below:
When no video data is available (the element's readyState attribute is either HAVE_NOTHING, or HAVE_METADATA but no video data has yet been obtained at all, or the element's readyState attribute is any subsequent value but the media resource does not have a video channel)
The video element represents its poster frame, if any, or else transparent black with no intrinsic dimensions.

When the video element is paused, the current playback position is the first frame of video, and the element's show poster flag is set
The video element represents its poster frame, if any, or else the first frame of the video.

When the video element is paused, and the frame of video corresponding to the current playback position is not available (e.g. because the video is seeking or buffering)
When the video element is neither potentially playing nor paused (e.g. when seeking or stalled)
The video element represents the last frame of the video to have been rendered.

When the video element is paused
The video element represents the frame of video corresponding to the current playback position.

Otherwise (the video element has a video channel and is potentially playing)
The video element represents the frame of video at the continuously increasing "current" position. When the current playback position changes such that the last frame rendered is no longer the frame corresponding to the current playback position in the video, the new frame must be rendered.

Frames of video must be obtained from the video track that was selected when the event loop last reached step 1.
Which frame in a video stream corresponds to a particular playback position is defined by the video stream's format.
The video
element also represents any text track cues whose text track cue active flag is set and whose
text track is in the showing mode, and any
audio from the media resource, at the current playback position.
Any audio associated with the media resource must, if played, be played synchronised with the current playback position, at the element's effective media volume. The user agent must play the audio from audio tracks that were enabled when the event loop last reached step 1.
In addition to the above, the user agent may provide messages to the user (such as "buffering", "no video loaded", "error", or more detailed information) by overlaying text or icons on the video or other areas of the element's playback area, or in another appropriate manner.
User agents that cannot render the video may instead make the element represent a link to an external video playback utility or to the video data itself.
When a video
element's media resource has a video channel, the
element provides a paint source whose width is the media resource's
intrinsic width, whose height is the
media resource's intrinsic
height, and whose appearance is the frame of video corresponding to the current playback position, if that is available, or else
(e.g. when the video is seeking or buffering) its previous appearance, if any, or else (e.g.
because the video is still loading the first frame) blackness.
videoWidth
videoHeight
These attributes return the intrinsic dimensions of the video, or zero if the dimensions are not known.
The intrinsic width and intrinsic height of the media resource are the dimensions of the resource in CSS pixels after taking into account the resource's dimensions, aspect ratio, clean aperture, resolution, and so forth, as defined for the format used by the resource. If an anamorphic format does not define how to apply the aspect ratio to the video data's dimensions to obtain the "correct" dimensions, then the user agent must apply the ratio by increasing one dimension and leaving the other unchanged.
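For example (a hypothetical case): suppose an anamorphic stream stores 720×480 pixel frames, its container declares a 16:9 display aspect ratio, and the format does not say how to apply that ratio. Since 480 × 16/9 ≈ 853 is larger than 720, the user agent would increase the width and leave the height unchanged, giving intrinsic dimensions of approximately 853×480 CSS pixels.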
The videoWidth
IDL attribute must return
the intrinsic width of the video in CSS pixels.
The videoHeight
IDL attribute must return
the intrinsic height of the video in CSS
pixels. If the element's readyState
attribute is HAVE_NOTHING
, then the attributes must return 0.
Whenever the intrinsic width
or intrinsic height of the video changes
(including, for example, because the selected video
track was changed), if the element's readyState
attribute is not HAVE_NOTHING
, the user agent must
queue a task to fire a simple event named resize
at the media element.
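For instance, a minimal script sketch (the selector and logging are illustrative) observing these attributes and the resize event:

var video = document.querySelector('video');
// Both attributes return 0 while readyState is HAVE_NOTHING.
console.log(video.videoWidth, video.videoHeight); // 0 0
video.addEventListener('resize', function () {
  // Fired when the intrinsic dimensions change, e.g. once metadata is
  // available or when a different video track is selected.
  console.log('intrinsic size: ' + video.videoWidth + 'x' + video.videoHeight);
});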
The video
element supports dimension attributes.
In the absence of style rules to the contrary, video content should be rendered inside the element's playback area such that the video content is shown centered in the playback area at the largest possible size that fits completely within it, with the video content's aspect ratio being preserved. Thus, if the aspect ratio of the playback area does not match the aspect ratio of the video, the video will be shown letterboxed or pillarboxed. Areas of the element's playback area that do not contain the video represent nothing.
In user agents that implement CSS, the above requirement can be implemented by using the style rule suggested in the rendering section.
The intrinsic width of a video
element's playback area is the intrinsic width of
the poster frame, if that is available and the element currently
represents its poster frame; otherwise, it is the intrinsic width of the video resource, if that is
available; otherwise the intrinsic width is missing.
The intrinsic height of a video
element's playback area is the intrinsic height of
the poster frame, if that is available and the element currently
represents its poster frame; otherwise it is the intrinsic height of the video resource, if that is
available; otherwise the intrinsic height is missing.
The default object size is a width of 300 CSS pixels and a height of 150 CSS pixels. [[!CSSIMAGES]]
User agents should provide controls to enable or disable the display of closed captions, audio description tracks, and other additional data associated with the video stream, though such features should, again, not interfere with the page's normal rendering.
User agents may allow users to view the video content in manners more suitable to the user
(e.g. full-screen or in an independent resizable window). As for the other user interface
features, controls to enable this should not interfere with the page's normal rendering unless the
user agent is exposing a user interface.
In such an independent context, however, user agents may make full user interfaces visible, with,
e.g., play, pause, seeking, and volume controls, even if the controls
attribute is absent.
User agents may allow video playback to affect system features that could interfere with the user's experience; for example, user agents could disable screensavers while video playback is in progress.
The poster
IDL attribute must
reflect the poster
content attribute.
This example shows how to detect when a video has failed to play correctly:
<script>
 function failed(e) {
   // video playback failed - show a message saying why
   switch (e.target.error.code) {
     case e.target.error.MEDIA_ERR_ABORTED:
       alert('You aborted the video playback.');
       break;
     case e.target.error.MEDIA_ERR_NETWORK:
       alert('A network error caused the video download to fail part-way.');
       break;
     case e.target.error.MEDIA_ERR_DECODE:
       alert('The video playback was aborted due to a corruption problem or because the video used features your browser did not support.');
       break;
     case e.target.error.MEDIA_ERR_SRC_NOT_SUPPORTED:
       alert('The video could not be loaded, either because the server or network failed or because the format is not supported.');
       break;
     default:
       alert('An unknown error occurred.');
       break;
   }
 }
</script>
<p><video src="tgif.vid" autoplay controls onerror="failed(event)"></video></p>
<p><a href="tgif.vid">Download the video file</a>.</p>
The audio element

If the element has a controls attribute: Interactive content.
If the element has a controls attribute: Palpable content.

Content model:
If the element has a src attribute: zero or more track elements, then transparent, but with no media element descendants.
If the element does not have a src attribute: zero or more source elements, then zero or more track elements, then transparent, but with no media element descendants.

Content attributes:
src — Address of the resource
crossorigin — How the element handles crossorigin requests
preload — Hints how much buffering the media resource will likely need
autoplay — Hint that the media resource can be started automatically when the page is loaded
mediagroup — Groups media elements together with an implicit MediaController
loop — Whether to loop the media resource
muted — Whether to mute the media resource by default
controls — Show user agent controls

Allowed ARIA role attribute value: application.
Allowed ARIA state and property attributes: aria-* attributes applicable to the allowed roles.

DOM interface:

[NamedConstructor=Audio(optional DOMString src)]
interface HTMLAudioElement : HTMLMediaElement {};
An audio
element represents a sound or audio stream.
Content may be provided inside the audio
element. User agents
should not show this content to the user; it is intended for older Web browsers which do
not support audio
, so that legacy audio plugins can be tried, or to show text to the
users of these older browsers informing them of how to access the audio contents.
In particular, this content is not intended to address accessibility concerns. To
make audio content accessible to the deaf or to those with other physical or cognitive
disabilities, a variety of features are available. If captions or a sign language video are
available, the video
element can be used instead of the audio
element to
play the audio, allowing users to enable the visual alternatives. Chapter titles can be provided
to aid navigation, using the track
element and a WebVTT file. And,
naturally, transcripts or other textual alternatives can be provided by simply linking to them in
the prose near the audio
element. [[!WEBVTT]]
The audio
element is a media element whose media data is
ostensibly audio data.
The src
, preload
,
autoplay
, mediagroup
, loop
, muted
, and controls
attributes are the attributes common to all media
elements.
When an audio
element is potentially playing, it must have its audio
data played synchronised with the current playback position, at the element's
effective media volume. The user agent must play the audio from audio tracks that
were enabled when the event loop last reached step 1.
When an audio
element is not potentially playing, audio must not play
for the element.
Audio( [ url ] )
Returns a new audio element, with the src attribute set to the value passed in the argument, if applicable.
A constructor is provided for creating HTMLAudioElement
objects (in addition to
the factory methods from DOM such as createElement()
): Audio(src)
. When invoked as a
constructor, it must return a new HTMLAudioElement
object (a new audio
element). The element must be created with its preload
attribute set to the literal value "auto
". If the
src argument is present, the object created must be created with its src
content attribute set to the provided value (this will cause the user agent to invoke the object's
resource selection algorithm before returning).
The element's node document must be the active document of the browsing
context of the Window
object on which the interface object of the invoked
constructor is found.
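A minimal sketch of using the constructor ("clip.mp3" is a placeholder URL):

// preload is set to "auto", and because a src value is supplied the
// resource selection algorithm is invoked before the constructor returns.
var sound = new Audio('clip.mp3');
sound.addEventListener('canplaythrough', function () {
  sound.play();
});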
The source element

As a child of a media element, before any flow content or track elements.

Content attributes:
src — Address of the resource
type — Type of embedded resource

DOM interface:

interface HTMLSourceElement : HTMLElement {
  attribute DOMString src;
  attribute DOMString type;

  // also has obsolete members
};
The source
element allows authors to specify multiple alternative media resources for media
elements. It does not represent anything on its own.
The src
attribute gives the address of the
media resource. The value must be a valid non-empty URL potentially surrounded
by spaces. This attribute must be present.
Dynamically modifying a source
element and its attribute when the
element is already inserted in a video
or audio
element will have no
effect. To change what is playing, just use the src
attribute
on the media element directly, possibly making use of the canPlayType()
method to pick from amongst available
resources. Generally, manipulating source
elements manually after the document has
been parsed is an unnecessarily complicated approach.
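For instance, a script sketch of that approach (the file names and MIME types are placeholders):

var video = document.querySelector('video');
// canPlayType() returns "" (falsy), "maybe", or "probably"; set the media
// element's src directly instead of editing source elements.
if (video.canPlayType('video/mp4; codecs="avc1.42E01E, mp4a.40.2"')) {
  video.src = 'clip.mp4';
} else {
  video.src = 'clip.ogv';
}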
The type
attribute gives the type of the
media resource, to help the user agent determine if it can play this media
resource before fetching it. If specified, its value must be a valid MIME
type. The codecs
parameter, which certain MIME types define, might be
necessary to specify exactly how the resource is encoded. [[!RFC6381]]
The following list shows some examples of how to use the codecs=
MIME
parameter in the type
attribute.
<source src='video.mp4' type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
<source src='video.mp4' type='video/mp4; codecs="avc1.58A01E, mp4a.40.2"'>
<source src='video.mp4' type='video/mp4; codecs="avc1.4D401E, mp4a.40.2"'>
<source src='video.mp4' type='video/mp4; codecs="avc1.64001E, mp4a.40.2"'>
<source src='video.mp4' type='video/mp4; codecs="mp4v.20.8, mp4a.40.2"'>
<source src='video.mp4' type='video/mp4; codecs="mp4v.20.240, mp4a.40.2"'>
<source src='video.3gp' type='video/3gpp; codecs="mp4v.20.8, samr"'>
<source src='video.ogv' type='video/ogg; codecs="theora, vorbis"'>
<source src='video.ogv' type='video/ogg; codecs="theora, speex"'>
<source src='audio.ogg' type='audio/ogg; codecs=vorbis'>
<source src='audio.spx' type='audio/ogg; codecs=speex'>
<source src='audio.oga' type='audio/ogg; codecs=flac'>
<source src='video.ogv' type='video/ogg; codecs="dirac, vorbis"'>
If a source
element is inserted as a child of a media element that
has no src
attribute and whose networkState
has the value NETWORK_EMPTY
, the user agent must invoke the media
element's resource selection
algorithm.
The IDL attributes src
and type
must reflect the respective content
attributes of the same name.
If the author isn't sure if user agents will all be able to render the media resources
provided, the author can listen to the error
event on the last
source
element and trigger fallback behaviour:
<script>
 function fallback(video) {
   // replace <video> with its contents
   while (video.hasChildNodes()) {
     if (video.firstChild instanceof HTMLSourceElement)
       video.removeChild(video.firstChild);
     else
       video.parentNode.insertBefore(video.firstChild, video);
   }
   video.parentNode.removeChild(video);
 }
</script>
<video controls autoplay>
 <source src='video.mp4' type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'>
 <source src='video.ogv' type='video/ogg; codecs="theora, vorbis"'
         onerror="fallback(parentNode)">
 ...
</video>
The track element

Content attributes:
kind — The type of text track
src — Address of the resource
srclang — Language of the text track
label — User-visible label
default — Enable the track if no other text track is more suitable

DOM interface:

interface HTMLTrackElement : HTMLElement {
  attribute DOMString kind;
  attribute DOMString src;
  attribute DOMString srclang;
  attribute DOMString label;
  attribute boolean default;

  const unsigned short NONE = 0;
  const unsigned short LOADING = 1;
  const unsigned short LOADED = 2;
  const unsigned short ERROR = 3;
  readonly attribute unsigned short readyState;

  readonly attribute TextTrack track;
};
The track
element allows authors to specify explicit external timed text tracks for media elements. It
does not represent anything on its own.
The kind
attribute is an enumerated
attribute. The following table lists the keywords defined for this attribute. The keyword
given in the first cell of each row maps to the state given in the second cell.
Keyword | State | Brief description
---|---|---
subtitles | Subtitles | Transcription or translation of the dialogue, suitable for when the sound is available but not understood (e.g. because the user does not understand the language of the media resource's audio track). Overlaid on the video.
captions | Captions | Transcription or translation of the dialogue, sound effects, relevant musical cues, and other relevant audio information, suitable for when sound is unavailable or not clearly audible (e.g. because it is muted, drowned out by ambient noise, or because the user is deaf). Overlaid on the video; labeled as appropriate for the hard-of-hearing.
descriptions | Descriptions | Textual descriptions of the video component of the media resource, intended for audio synthesis when the visual component is obscured, unavailable, or not usable (e.g. because the user is interacting with the application without a screen while driving, or because the user is blind). Synthesized as audio.
chapters | Chapters | Chapter titles, intended to be used for navigating the media resource. Displayed as an interactive (potentially nested) list in the user agent's interface.
metadata | Metadata | Tracks intended for use from script. Not displayed by the user agent.
The attribute may be omitted. The missing value default is the subtitles state.
The src
attribute gives the address of the text
track data. The value must be a valid non-empty URL potentially surrounded by spaces.
This attribute must be present.
If the element has a src
attribute whose value is not the
empty string and whose value, when the attribute was set, could be successfully resolved relative to the element, then the element's track
URL is the resulting absolute URL. Otherwise, the element's track
URL is the empty string.
If the element's track URL identifies a WebVTT resource, and the element's kind
attribute is not in the metadata state, then the WebVTT file must be a
WebVTT file using cue text. [[!WEBVTT]]
Furthermore, if the element's track URL identifies a WebVTT resource, and the
element's kind
attribute is in the chapters state, then the WebVTT file must be both a
WebVTT file using chapter title text and a WebVTT file using only nested
cues. [[!WEBVTT]]
The srclang
attribute gives the language of
the text track data. The value must be a valid BCP 47 language tag. This attribute must be present
if the element's kind
attribute is in the subtitles state. [[!BCP47]]
If the element has a srclang
attribute whose value is
not the empty string, then the element's track language is the value of the attribute.
Otherwise, the element has no track language.
The label
attribute gives a user-readable
title for the track. This title is used by user agents when listing subtitle, caption, and audio description tracks in their user interface.
The value of the label
attribute, if the attribute is
present, must not be the empty string. Furthermore, there must not be two track
element children of the same media element whose kind
attributes are in the same state, whose srclang
attributes are both missing or have values that
represent the same language, and whose label
attributes are
again both missing or both have the same value.
If the element has a label
attribute whose value is not
the empty string, then the element's track label is the value of the attribute.
Otherwise, the element's track label is the empty string.
The default
attribute is a boolean
attribute, which, if specified, indicates that the track is to be enabled if the user's
preferences do not indicate that another track would be more appropriate.
Each media element must have no more than one track
element child
whose kind
attribute is in the subtitles or captions state and whose default
attribute is specified.
Each media element must have no more than one track
element child
whose kind
attribute is in the descriptions state and whose default
attribute is specified.
Each media element must have no more than one track
element child
whose kind
attribute is in the chapters state and whose default
attribute is specified.
There is no limit on the number of track
elements whose kind
attribute is in the metadata state and whose default
attribute is specified.
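For instance, a markup sketch (file names are placeholders) that satisfies these constraints, since only one of the two subtitle tracks carries the default attribute:

<video src="movie.webm" controls>
 <track kind="subtitles" src="movie.en.vtt" srclang="en" label="English" default>
 <track kind="subtitles" src="movie.de.vtt" srclang="de" label="Deutsch">
</video>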
readyState
Returns the text track readiness state, represented by a number from the following list:
NONE (0)
The text track not loaded state.
LOADING (1)
The text track loading state.
LOADED (2)
The text track loaded state.
ERROR (3)
The text track failed to load state.

track
Returns the TextTrack object corresponding to the text track of the track element.
The readyState attribute must return the numeric value corresponding to the text track readiness state of the track element's text track, as defined by the following list:

NONE (numeric value 0)
The text track not loaded state.
LOADING (numeric value 1)
The text track loading state.
LOADED (numeric value 2)
The text track loaded state.
ERROR (numeric value 3)
The text track failed to load state.

The track IDL attribute must, on getting, return the track element's text track's corresponding TextTrack object.
The src
, srclang
, label
, and default
IDL attributes must reflect the
respective content attributes of the same name. The kind
IDL attribute must reflect the content
attribute of the same name, limited to only known values.
This video has subtitles in several languages:
<video src="brave.webm"> <track kind=subtitles src=brave.en.vtt srclang=en label="English"> <track kind=captions src=brave.en.hoh.vtt srclang=en label="English for the Hard of Hearing"> <track kind=subtitles src=brave.fr.vtt srclang=fr lang=fr label="Français"> <track kind=subtitles src=brave.de.vtt srclang=de lang=de label="Deutsch"> </video>
(The lang
attributes on the last two describe the language of
the label
attribute, not the language of the subtitles
themselves. The language of the subtitles is given by the srclang
attribute.)
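A script sketch (assuming the markup above and the string-valued mode attribute of the TextTrack API) that shows the French subtitles and disables the other text tracks:

var video = document.querySelector('video');
for (var i = 0; i < video.textTracks.length; i++) {
  var track = video.textTracks[i];
  // kind and language are exposed from the track element's attributes.
  track.mode = (track.kind == 'subtitles' && track.language == 'fr')
             ? 'showing' : 'disabled';
}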
Media elements (audio
and video
, in
this specification) implement the following interface:
enum CanPlayTypeResult { "" /* empty string */, "maybe", "probably" };
typedef (MediaStream or MediaSource or Blob) MediaProvider;

interface HTMLMediaElement : HTMLElement {

  // error state
  readonly attribute MediaError? error;

  // network state
  attribute DOMString src;
  attribute MediaProvider? srcObject;
  readonly attribute DOMString currentSrc;
  attribute DOMString? crossOrigin;
  const unsigned short NETWORK_EMPTY = 0;
  const unsigned short NETWORK_IDLE = 1;
  const unsigned short NETWORK_LOADING = 2;
  const unsigned short NETWORK_NO_SOURCE = 3;
  readonly attribute unsigned short networkState;
  attribute DOMString preload;
  readonly attribute TimeRanges buffered;
  void load();
  CanPlayTypeResult canPlayType(DOMString type);

  // ready state
  const unsigned short HAVE_NOTHING = 0;
  const unsigned short HAVE_METADATA = 1;
  const unsigned short HAVE_CURRENT_DATA = 2;
  const unsigned short HAVE_FUTURE_DATA = 3;
  const unsigned short HAVE_ENOUGH_DATA = 4;
  readonly attribute unsigned short readyState;
  readonly attribute boolean seeking;

  // playback state
  attribute double currentTime;
  void fastSeek(double time);
  readonly attribute unrestricted double duration;
  Date getStartDate();
  readonly attribute boolean paused;
  attribute double defaultPlaybackRate;
  attribute double playbackRate;
  readonly attribute TimeRanges played;
  readonly attribute TimeRanges seekable;
  readonly attribute boolean ended;
  attribute boolean autoplay;
  attribute boolean loop;
  void play();
  void pause();

  // media controller
  attribute DOMString mediaGroup;
  attribute MediaController? controller;

  // controls
  attribute boolean controls;
  attribute double volume;
  attribute boolean muted;
  attribute boolean defaultMuted;

  // tracks
  [SameObject] readonly attribute AudioTrackList audioTracks;
  [SameObject] readonly attribute VideoTrackList videoTracks;
  [SameObject] readonly attribute TextTrackList textTracks;
  TextTrack addTextTrack(TextTrackKind kind, optional DOMString label = "", optional DOMString language = "");
};
The media element attributes, src
, crossorigin
, preload
, autoplay
,
mediagroup
, loop
,
muted
, and controls
, apply to all media
elements. They are defined in this section.
Media elements are used to present audio data, or video and audio data, to the user. This is referred to as media data in this section, since this section applies equally to media elements for audio or for video. The term media resource is used to refer to the complete set of media data, e.g. the complete video file, or complete audio file.
A media resource can have multiple audio and video tracks. For the purposes of a
media element, the video data of the media resource is only that of the
currently selected track (if any) as given by the element's videoTracks
attribute when the event loop last
reached step 1, and the audio data of the media resource is the result of mixing all
the currently enabled tracks (if any) given by the element's audioTracks
attribute when the event loop last
reached step 1.
Both audio
and video
elements can be used for both audio
and video. The main difference between the two is simply that the audio
element has
no playback area for visual content (such as video or captions), whereas the video
element does.
Except where otherwise explicitly specified, the task source for all the tasks queued in this section and its subsections is the media element event task source of the media element in question.
error
Returns a MediaError
object representing the current error state of the
element.
Returns null if there is no error.
All media elements have an associated error status, which
records the last error the element encountered since its resource selection algorithm was last invoked. The
error
attribute, on getting, must return the
MediaError
object created for this last error, or null if there has not been an
error.
interface MediaError {
  const unsigned short MEDIA_ERR_ABORTED = 1;
  const unsigned short MEDIA_ERR_NETWORK = 2;
  const unsigned short MEDIA_ERR_DECODE = 3;
  const unsigned short MEDIA_ERR_SRC_NOT_SUPPORTED = 4;

  readonly attribute unsigned short code;
};
error.code
Returns the current error's error code, from the list below.
The code
attribute of a
MediaError
object must return the code for the error, which must be one of the
following:
MEDIA_ERR_ABORTED (numeric value 1)
The fetching process for the media resource was aborted by the user agent at the user's request.
MEDIA_ERR_NETWORK (numeric value 2)
A network error of some description caused the user agent to stop fetching the media resource, after the resource was established to be usable.
MEDIA_ERR_DECODE (numeric value 3)
An error of some description occurred while decoding the media resource, after the resource was established to be usable.
MEDIA_ERR_SRC_NOT_SUPPORTED (numeric value 4)
The media resource indicated by the src attribute or assigned media provider object was not suitable.

The src
content attribute on media elements gives the address of the media resource (video, audio) to show. The
attribute, if present, must contain a valid non-empty URL potentially surrounded by
spaces.
If the itemprop
attribute is specified on the media
element, then the src
attribute must also be
specified.
The crossorigin
content attribute on
media elements is a CORS settings attribute.
If a media element is created with a
src
attribute, the user agent must immediately invoke the
media element's resource selection
algorithm.
If a src
attribute of a media element is set
or changed, the user agent must invoke the media element's media element load
algorithm. (Removing the src
attribute does
not do this, even if there are source
elements present.)
The src
IDL attribute on media elements must reflect the content attribute of the same
name.
The crossOrigin
IDL attribute must
reflect the crossorigin
content attribute.
A media provider object is an object that can represent a media resource,
separate from a URL. MediaStream
objects, MediaSource
objects, Blob
objects, and File
objects are all media provider objects.
Each media element can have an assigned media provider object, which is a media provider object. When a media element is created, it has no assigned media provider object.
srcObject [ = source ]
Allows the media element to be assigned a media provider object.

currentSrc
Returns the address of the current media resource, if any.
Returns the empty string when there is no media resource, or it doesn't have an address.
The currentSrc
IDL attribute must initially be set to
the empty string. Its value is changed by the resource
selection algorithm defined below.
The srcObject
IDL attribute, on getting,
must return the element's assigned media provider object, if any, or null otherwise.
On setting, it must set the element's assigned media provider object to the new
value, and then invoke the element's media element load algorithm.
There are three ways to specify a media resource: the srcObject
IDL attribute, the src
content attribute, and source
elements. The IDL
attribute takes priority, followed by the content attribute, followed by the elements.
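A sketch of the highest-priority mechanism, assigning a media provider object (here a File, which is a Blob, chosen by the user; the selectors are illustrative, and implementation support for non-MediaStream providers may vary):

var input = document.querySelector('input[type=file]');
var audio = document.querySelector('audio');
input.onchange = function () {
  // Assigning a media provider object invokes the media element load
  // algorithm; any src attribute or source element children are then
  // ignored, since srcObject takes priority.
  audio.srcObject = input.files[0];
  audio.play();
};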
A media resource can be described in terms of its type, specifically a
MIME type, in some cases with a codecs
parameter. (Whether the
codecs
parameter is allowed or not depends on the MIME type.) [[!RFC6381]]
Types are usually somewhat incomplete descriptions; for example "video/mpeg
" doesn't say anything except what the container type is, and even a
type like "video/mp4; codecs="avc1.42E01E, mp4a.40.2"
" doesn't
include information like the actual bitrate (only the maximum bitrate). Thus, given a type, a user
agent can often only know whether it might be able to play media of that type (with
varying levels of confidence), or whether it definitely cannot play media of that
type.
A type that the user agent knows it cannot render is one that describes a resource that the user agent definitely does not support, for example because it doesn't recognise the container type, or it doesn't support the listed codecs.
The MIME type "application/octet-stream
" with no parameters is never
a type that the user agent knows it cannot render. User agents must treat that type
as equivalent to the lack of any explicit Content-Type metadata
when it is used to label a potential media resource.
Only the MIME type "application/octet-stream
" with no
parameters is special-cased here; if any parameter appears with it, it will be treated just like
any other MIME type. This is a deviation from the rule that unknown MIME type parameters should be ignored.
canPlayType(type)
Returns the empty string (a negative response), "maybe", or "probably" based on how confident the user agent is that it can play media resources of the given type.
The canPlayType(type)
method must return the
empty string if type is a type that the user agent knows it cannot
render or is the type "application/octet-stream
"; it must return "probably
" if the user agent is confident
that the type represents a media resource that it can render if used with this
audio
or video
element; and it must return "maybe
" otherwise. Implementors are encouraged
to return "maybe
" unless the type can be
confidently established as being supported or not. Generally, a user agent should never return
"probably
" for a type that allows the codecs
parameter if that parameter is not present.
This script tests to see if the user agent supports a (fictional) new format to dynamically
decide whether to use a video
element or a plugin:
<section id="video"> <p><a href="playing-cats.nfv">Download video</a></p> </section> <script> var videoSection = document.getElementById('video'); var videoElement = document.createElement('video'); var support = videoElement.canPlayType('video/x-new-fictional-format;codecs="kittens,bunnies"'); if (support != "probably" && "New Fictional Video Plugin" in navigator.plugins) { // not confident of browser support // but we have a plugin // so use plugin instead videoElement = document.createElement("embed"); } else if (support == "") { // no support from browser and no plugin // do nothing videoElement = null; } if (videoElement) { while (videoSection.hasChildNodes()) videoSection.removeChild(videoSection.firstChild); videoElement.setAttribute("src", "playing-cats.nfv"); videoSection.appendChild(videoElement); } </script>
The type
attribute of the
source
element allows the user agent to avoid downloading resources that use formats
it cannot render.
networkState
Returns the current state of network activity for the element, from the codes in the list below.
As media elements interact with the network, their current
network activity is represented by the networkState
attribute. On getting, it must
return the current network state of the element, which must be one of the following values:
NETWORK_EMPTY (numeric value 0)
The element has not yet been initialised. All attributes are in their initial states.
NETWORK_IDLE (numeric value 1)
The element's resource selection algorithm is active and has selected a resource, but it is not actually using the network at this time.
NETWORK_LOADING (numeric value 2)
The user agent is actively trying to download data.
NETWORK_NO_SOURCE (numeric value 3)
The element's resource selection algorithm is active, but it has not yet found a resource to use.

The resource selection algorithm defined
below describes exactly when the networkState
attribute changes value and what events fire to indicate changes in this state.
load()
Causes the element to reset and start selecting and loading a new media resource from scratch.
All media elements have an autoplaying flag, which must begin in the true state, and a delaying-the-load-event flag, which must begin in the false state. While the delaying-the-load-event flag is true, the element must delay the load event of its document.
When the load()
method on a media
element is invoked, the user agent must run the media element load
algorithm.
The media element load algorithm consists of the following steps.
Abort any already-running instance of the resource selection algorithm for this element.
If there are any tasks from the media element's media element event task source in one of the task queues, then remove those tasks.
Basically, pending events and callbacks for the media element are discarded when the media element starts loading a new resource.
If the media element's networkState
is set to NETWORK_LOADING
or NETWORK_IDLE
, queue a task to fire a
simple event named abort
at the media
element.
If the media element's networkState
is not set to NETWORK_EMPTY
, then run these
substeps:
Queue a task to fire a simple event named emptied
at the media element.
If a fetching process is in progress for the media element, the user agent should stop it.
If readyState
is not set to HAVE_NOTHING
, then set it to that state.
If the paused
attribute is false, then set it to
true.
If seeking
is true, set it to false.
Set the current playback position to 0.
Set the official playback position to 0.
If this changed the official playback position, then queue a task
to fire a simple event named timeupdate
at the media element.
Set the timeline offset to Not-a-Number (NaN).
Update the duration
attribute to Not-a-Number
(NaN).
The user agent will not fire a durationchange
event for this particular change of
the duration.
Set the playbackRate
attribute to the value of
the defaultPlaybackRate
attribute.
Set the error
attribute to null and the
autoplaying flag to true.
Invoke the media element's resource selection algorithm.
Playback of any previously playing media resource for this element stops.
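For instance, a sketch (the file name is a placeholder) of restarting resource selection after editing source children, which, as noted earlier, has no effect on its own:

var video = document.querySelector('video');
video.querySelector('source').src = 'other.webm'; // no effect by itself
video.load(); // discards pending events, resets the element, and re-runs
              // the resource selection algorithm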
The resource selection algorithm for a media element is as follows. This algorithm is always invoked as part of a task, but one of the first steps in the algorithm is to return and continue running the remaining steps in parallel. In addition, this algorithm interacts closely with the event loop mechanism; in particular, it has synchronous sections (which are triggered as part of the event loop algorithm). Steps in such sections are marked with ⌛.
Set the element's networkState
attribute to
the NETWORK_NO_SOURCE
value.
Set the element's show poster flag to true.
Set the media element's delaying-the-load-event flag to true (this delays the load event).
Await a stable state, allowing the task that invoked this algorithm to continue. The synchronous section consists of all the remaining steps of this algorithm until the algorithm says the synchronous section has ended. (Steps in synchronous sections are marked with ⌛.)
⌛ If the media element's blocked-on-parser flag is false, then populate the list of pending text tracks.
⌛ If the media element has an assigned media provider object, then let mode be object.
⌛ Otherwise, if the media element has no assigned media provider
object but has a src
attribute, then let mode be attribute.
⌛ Otherwise, if the media element does not have an assigned media provider
object and does not have a src
attribute, but does have a source
element child, then
let mode be children and let candidate
be the first such source
element child in tree order.
⌛ Otherwise the media element has no assigned media provider
object and has neither a src
attribute nor a source
element child: set the
networkState
to NETWORK_EMPTY
, and abort these steps; the
synchronous section ends.
⌛ Set the media element's networkState
to NETWORK_LOADING
.
⌛ Queue a task to fire a simple event named loadstart
at the media element.
Run the appropriate steps from the following list:
If mode is object
⌛ Set the currentSrc
attribute to
the empty string.
End the synchronous section, continuing the remaining steps in parallel.
Run the resource fetch algorithm with the assigned media provider object. If that algorithm returns without aborting this one, then the load failed.
Failed with media provider: Reaching this step indicates that the media resource failed to load. Queue a task to run the dedicated media source failure steps.
Wait for the task queued by the previous step to have executed.
Abort these steps. The element won't attempt to load another resource until this algorithm is triggered again.
If mode is attribute
⌛ If the src
attribute's value is the empty string, then end the synchronous section, and jump
down to the failed with attribute step below.
⌛ Let absolute URL be the absolute URL that
would have resulted from resolving the URL
specified by the src
attribute's value relative to the
media element when the src
attribute was last
changed.
⌛ If absolute URL was obtained successfully, set the currentSrc
attribute to absolute
URL.
End the synchronous section, continuing the remaining steps in parallel.
If absolute URL was obtained successfully, run the resource fetch algorithm with absolute URL. If that algorithm returns without aborting this one, then the load failed.
Failed with attribute: Reaching this step indicates that the media resource failed to load or that the given URL could not be resolved. Queue a task to run the dedicated media source failure steps.
Wait for the task queued by the previous step to have executed.
Abort these steps. The element won't attempt to load another resource until this algorithm is triggered again.
Otherwise (mode is children)
⌛ Let pointer be a position defined by two adjacent nodes in the media element's child list, treating the start of the list (before the first child in the list, if any) and end of the list (after the last child in the list, if any) as nodes in their own right. One node is the node before pointer, and the other node is the node after pointer. Initially, let pointer be the position between the candidate node and the next node, if there are any, or the end of the list, if it is the last node.
As nodes are inserted and removed into the media element, pointer must be updated as follows:

If a node is inserted between the two nodes that define pointer
Let pointer be the point between the node before pointer and the new node. In other words, insertions at pointer go after pointer.
If the node before pointer is removed
Let pointer be the point between the node after pointer and the node before the node after pointer. In other words, pointer doesn't move relative to the remaining nodes.
If the node after pointer is removed
Let pointer be the point between the node before pointer and the node after the node before pointer. Just as with the previous case, pointer doesn't move relative to the remaining nodes.

Other changes don't affect pointer.
⌛ Process candidate: If candidate does not have a
src
attribute, or if its src
attribute's value is the empty string, then end the
synchronous section, and jump down to the failed with elements step
below.
⌛ Let absolute URL be the absolute URL that
would have resulted from resolving the URL
specified by candidate's src
attribute's value relative to the candidate when the src
attribute was last changed.
⌛ If absolute URL was not obtained successfully, then end the synchronous section, and jump down to the failed with elements step below.
⌛ If candidate has a type
attribute whose value, when parsed as a MIME
type (including any codecs described by the codecs
parameter, for
types that define that parameter), represents a type that the user agent knows it cannot
render, then end the synchronous section, and jump down to the failed with elements step below.
⌛ Set the currentSrc
attribute to absolute URL.
End the synchronous section, continuing the remaining steps in parallel.
Run the resource fetch algorithm with absolute URL. If that algorithm returns without aborting this one, then the load failed.
Failed with elements: Queue a task to fire a simple
event named error
at the candidate element.
Await a stable state. The synchronous section consists of all the remaining steps of this algorithm until the algorithm says the synchronous section has ended. (Steps in synchronous sections are marked with ⌛.)
⌛ Forget the media element's media-resource-specific tracks.
⌛ Find next candidate: Let candidate be null.
⌛ Search loop: If the node after pointer is the end of the list, then jump to the waiting step below.
⌛ If the node after pointer is a source
element,
let candidate be that element.
⌛ Advance pointer so that the node before pointer is now the node that was after pointer, and the node after pointer is the node after the node that used to be after pointer, if any.
⌛ If candidate is null, jump back to the search loop step. Otherwise, jump back to the process candidate step.
⌛ Waiting: Set the element's networkState
attribute to the NETWORK_NO_SOURCE
value.
⌛ Set the element's show poster flag to true.
⌛ Queue a task to set the element's delaying-the-load-event flag to false. This stops delaying the load event.
End the synchronous section, continuing the remaining steps in parallel.
Wait until the node after pointer is a node other than the end of the list. (This step might wait forever.)
Await a stable state. The synchronous section consists of all the remaining steps of this algorithm until the algorithm says the synchronous section has ended. (Steps in synchronous sections are marked with ⌛.)
⌛ Set the element's delaying-the-load-event flag back to true (this delays the load event again, in case it hasn't been fired yet).
⌛ Set the networkState
back to NETWORK_LOADING
.
⌛ Jump back to the find next candidate step above.
The dedicated media source failure steps are the following steps:
Set the error
attribute to a new
MediaError
object whose code
attribute
is set to MEDIA_ERR_SRC_NOT_SUPPORTED
.
Set the element's networkState
attribute to
the NETWORK_NO_SOURCE
value.
Set the element's show poster flag to true.
Fire a simple event named error
at
the media element.
Set the element's delaying-the-load-event flag to false. This stops delaying the load event.
The resource fetch algorithm for a media element and a given absolute URL or media provider object is as follows:
If the algorithm was invoked with a URL, then let mode be remote, otherwise let mode be local.
If mode is local, then let the current media resource be the resource given by the absolute URL passed to this algorithm; otherwise, let the current media resource be the resource given by the media provider object. Either way, the current media resource is now the element's media resource.
Remove all media-resource-specific text tracks from the media element's list of pending text tracks, if any.
Run the appropriate steps from the following list:
If mode is remote
Optionally, run the following substeps. This is the expected behaviour if the user agent
intends to not attempt to fetch the resource until the user requests it explicitly (e.g. as
a way to implement the preload
attribute's none
keyword).
Set the networkState
to NETWORK_IDLE
.
Queue a task to fire a simple event named suspend
at the element.
Queue a task to set the element's delaying-the-load-event flag to false. This stops delaying the load event.
Wait for the task to be run.
Wait for an implementation-defined event (e.g. the user requesting that the media element begin playback).
Set the element's delaying-the-load-event flag back to true (this delays the load event again, in case it hasn't been fired yet).
Set the networkState
to NETWORK_LOADING
.
Perform a potentially CORS-enabled fetch of the current media
resource's absolute URL, with the mode being the state of the
media element's crossorigin
content
attribute, the origin being the origin of the media element's
node document, and the default origin behaviour set to taint.
The resource obtained in this fashion, if any, contains the media data. It can
be CORS-same-origin or CORS-cross-origin; this affects whether
subtitles referenced in the media data are exposed in the API and, for
video
elements, whether a canvas
gets tainted when the video is drawn
on it.
The stall timeout is a user-agent defined length of time, which should be about
three seconds. When a media element that is actively attempting to obtain
media data has failed to receive any data for a duration equal to the stall
timeout, the user agent must queue a task to fire a simple
event named stalled
at the element.
User agents may allow users to selectively block or slow media data downloads. When a media element's download has been blocked altogether, the user agent must act as if it was stalled (as opposed to acting as if the connection was closed). The rate of the download may also be throttled automatically by the user agent, e.g. to balance the download with other connections sharing the same bandwidth.
User agents may decide to not download more content at any time, e.g.
after buffering five minutes of a one hour media resource, while waiting for the user to decide
whether to play the resource or not, while waiting for user input in an interactive resource, or
when the user navigates away from the page. When a media element's download has
been suspended, the user agent must queue a task, to set the networkState
to NETWORK_IDLE
and fire a simple event named
suspend
at the element. If and when downloading of the
resource resumes, the user agent must queue a task to set the networkState
to NETWORK_LOADING
. Between the queuing of these tasks,
the load is suspended (so progress
events don't fire,
as described above).
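A script sketch (the handlers are illustrative) that observes this network activity from script:

var video = document.querySelector('video');
video.addEventListener('progress', function () {
  // data is arriving (fired roughly every 350ms or per byte received,
  // whichever is less frequent)
});
video.addEventListener('stalled', function () {
  // no data has arrived for the stall timeout (about three seconds)
});
video.addEventListener('suspend', function () {
  // the user agent has deliberately paused the download
});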
The preload
attribute provides a hint
regarding how much buffering the author thinks is advisable, even in the absence of the autoplay
attribute.
When a user agent decides to completely stall a download, e.g. if it is waiting until the user starts playback before downloading any further content, the user agent must queue a task to set the element's delaying-the-load-event flag to false. This stops delaying the load event.
The user agent may use whatever means necessary to fetch the resource (within the constraints put forward by this and other specifications); for example, reconnecting to the server in the face of network errors, using HTTP range retrieval requests, or switching to a streaming protocol. The user agent must consider a resource erroneous only if it has given up trying to fetch it.
To determine the format of the media resource, the user agent must use the rules for sniffing audio and video specifically.
While the load is not suspended (see below), every 350ms (±200ms) or for every byte
received, whichever is least frequent, queue a task to fire a simple
event named progress
at the element.
The networking task source tasks to process the data as it is being fetched must each immediately queue a task to run the first appropriate steps from the media data processing steps list below. (A new task is used for this so that the work described below occurs relative to the media element event task source rather than the networking task source.)
When the networking task source has queued the last task as part of fetching the media resource (i.e. once the download has completed), if the fetching process completes without errors, including decoding the media data, and if all of the data is available to the user agent without network access, then, the user agent must move on to the final step below. This might never happen, e.g. when streaming an infinite resource such as Web radio, or if the resource is longer than the user agent's ability to cache data.
While the user agent might still need network access to obtain parts of the media resource, the user agent must remain on this step.
For example, if the user agent has discarded the first half of a video, the
user agent will remain at this step even once the playback has
ended, because there is always the chance the user will seek back to the start. In fact,
in this situation, once playback has ended, the user agent
will end up firing a suspend
event, as described
earlier.
Otherwise (mode is local)
The resource described by the current media resource, if any, contains the media data. It is CORS-same-origin.
If the current media resource is a raw data stream (e.g. from a
File
object), then to determine the format of the media resource,
the user agent must use the rules for sniffing audio and video specifically.
Otherwise, if the data stream is pre-decoded, then the format is the format given by the
relevant specification.
Whenever new data for the current media resource becomes available, queue a task to run the first appropriate steps from the media data processing steps list below.
When the current media resource is permanently exhausted (e.g. all the bytes of
a Blob
have been processed), if there were no decoding errors, then the user
agent must move on to the final step below. This might never happen, e.g. if the
current media resource is a MediaStream
.
The media data processing steps list is as follows:
If the media data cannot be fetched at all, due to network errors, causing the user agent to give up trying to fetch the resource
If the media data can be fetched but is found by inspection to be in an unsupported format, or can otherwise not be rendered at all
DNS errors, HTTP 4xx and 5xx errors (and equivalents in other protocols), and other fatal network errors that occur before the user agent has established whether the current media resource is usable, as well as the file using an unsupported container format, or using unsupported codecs for all the data, must cause the user agent to execute the following steps:
The user agent should cancel the fetching process.
Abort this subalgorithm, returning to the resource selection algorithm.
If the media resource is found to have an audio track
Create an AudioTrack
object to represent the audio track.
Update the media element's audioTracks
attribute's AudioTrackList
object with the new AudioTrack
object.
Let enable be unknown.
If either the media resource or the address of the current media resource indicate a particular set of audio tracks to enable, or if the user agent has information that would facilitate the selection of specific audio tracks to improve the user's experience, then: if this audio track is one of the ones to enable, then set enable to true, otherwise, set enable to false.
This could be triggered by Media Fragments URI fragment identifier syntax, but it could also be triggered e.g. by the user agent selecting a 5.1 surround sound audio track over a stereo audio track. [[!MEDIAFRAG]]
If enable is still unknown, then, if the media element does not yet have an enabled audio track, then set enable to true, otherwise, set enable to false.
If enable is true, then enable this audio track, otherwise, do not enable this audio track.
Fire a trusted event with the name addtrack
, that does not bubble and is not cancelable,
and that uses the TrackEvent
interface, with the track
attribute initialised to the new
AudioTrack
object, at this AudioTrackList
object.
If the media resource is found to have a video track
Create a VideoTrack
object to represent the video track.
Update the media element's videoTracks
attribute's VideoTrackList
object with the new VideoTrack
object.
Let enable be unknown.
If either the media resource or the address of the current media resource indicate a particular set of video tracks to enable, or if the user agent has information that would facilitate the selection of specific video tracks to improve the user's experience, then: if this video track is the first such video track, then set enable to true, otherwise, set enable to false.
This could again be triggered by Media Fragments URI fragment identifier syntax.
If enable is still unknown, then, if the media element does not yet have a selected video track, then set enable to true, otherwise, set enable to false.
If enable is true, then select this track and unselect any
previously selected video tracks, otherwise, do not select this video track. If other tracks
are unselected, then a change
event will be fired.
Fire a trusted event with the name addtrack
, that does not bubble and is not cancelable,
and that uses the TrackEvent
interface, with the track
attribute initialised to the new
VideoTrack
object, at this VideoTrackList
object.
Once enough of the media data has been fetched to determine the duration of the media resource, its dimensions, and other metadata
This indicates that the resource is usable. The user agent must follow these substeps:
Establish the media timeline for the purposes of the current playback position and the earliest possible position, based on the media data.
Update the timeline offset to the date and time that corresponds to the zero time in the media timeline established in the previous step, if any. If no explicit time and date is given by the media resource, the timeline offset must be set to Not-a-Number (NaN).
Set the current playback position and the official playback position to the earliest possible position.
Update the duration
attribute with the time of
the last frame of the resource, if known, on the media timeline established
above. If it is not known (e.g. a stream that is in principle infinite), update the duration
attribute to the value positive Infinity.
The user agent will queue a task
to fire a simple event named durationchange
at the element at this point.
For video
elements, set the videoWidth
and videoHeight
attributes, and queue a task
to fire a simple event named resize
at
the media element.
Further resize
events will be fired
if the dimensions subsequently change.
Set the readyState attribute to HAVE_METADATA.
A loadedmetadata
DOM event
will be fired as part of setting the readyState
attribute to a new value.
Let jumped be false.
If the media element's default playback start position is greater than zero, then seek to that time, and let jumped be true.
Let the media element's default playback start position be zero.
Let the initial playback position be zero.
If either the media resource or the address of the current media resource indicate a particular start time, then set the initial playback position to that time and, if jumped is still false, seek to that time and let jumped be true.
For example, with media formats that support the Media Fragments URI fragment identifier syntax, the fragment identifier can be used to indicate a start position. [[!MEDIAFRAG]]
If there is no enabled audio track, then
enable an audio track. This will cause a change
event to be fired.
If there is no selected video track,
then select a video track. This will cause a change
event to be fired.
If the media element has a current media controller, then: if jumped is true and the initial playback position, relative to the current media controller's timeline, is greater than the current media controller's media controller position, then seek the media controller to the media element's initial playback position, relative to the current media controller's timeline; otherwise, seek the media element to the media controller position, relative to the media element's timeline.
Once the readyState attribute reaches HAVE_CURRENT_DATA, after the loadeddata event has been fired, set the element's delaying-the-load-event flag to false. This stops delaying the load event.
A user agent that is attempting to reduce network usage while still fetching
the metadata for each media resource would also stop buffering at this point,
following the rules described previously, which involve the
networkState
attribute switching to the NETWORK_IDLE
value and a suspend
event firing.
The user agent is required to determine the duration of the media resource and go through this step before playing.
Fire a simple event named progress
at the media element.
Set the networkState
to NETWORK_IDLE
and fire a simple event named
suspend
at the media element.
If the user agent ever discards any media data and then needs to resume the network activity to obtain it again, then it must queue a task to set the networkState to NETWORK_LOADING.
If the user agent can keep the media resource loaded, then the algorithm will continue to its final step below, which aborts the algorithm.
Fatal network errors that occur after the user agent has established whether the current media resource is usable (i.e. once the media element's readyState attribute is no longer HAVE_NOTHING) must cause the user agent to execute the following steps:
The user agent should cancel the fetching process.
Set the error attribute to a new MediaError object whose code attribute is set to MEDIA_ERR_NETWORK.
Set the element's networkState
attribute
to the NETWORK_IDLE
value.
Set the element's delaying-the-load-event flag to false. This stops delaying the load event.
Fire a simple event named error
at
the media element.
Abort the overall resource selection algorithm.
Fatal errors in decoding the media data that occur after the user agent has established whether the current media resource is usable (i.e. once the media element's readyState attribute is no longer HAVE_NOTHING) must cause the user agent to execute the following steps:
The user agent should cancel the fetching process.
Set the error attribute to a new MediaError object whose code attribute is set to MEDIA_ERR_DECODE.
Set the element's networkState
attribute
to the NETWORK_IDLE
value.
Set the element's delaying-the-load-event flag to false. This stops delaying the load event.
Fire a simple event named error
at
the media element.
Abort the overall resource selection algorithm.
If the fetching process is aborted by the user, e.g. because the user pressed a "stop" button, then the user agent must execute the following steps. These steps are not
followed if the load()
method itself is invoked while
these steps are running, as the steps above handle that particular kind of abort.
The user agent should cancel the fetching process.
Set the error attribute to a new MediaError object whose code attribute is set to MEDIA_ERR_ABORTED.
Fire a simple event named abort
at
the media element.
If the media element's readyState attribute has a value equal to HAVE_NOTHING, set the element's networkState attribute to the NETWORK_EMPTY value, set the element's show poster flag to true, and fire a simple event named emptied at the element.
Otherwise, set the element's networkState
attribute to the NETWORK_IDLE
value.
Set the element's delaying-the-load-event flag to false. This stops delaying the load event.
Abort the overall resource selection algorithm.
The server returning data that is partially usable but cannot be optimally rendered must cause the user agent to render just the bits it can handle, and ignore the rest.
If the media data is CORS-same-origin, run the steps to expose a media-resource-specific text track with the relevant data.
Cross-origin videos do not expose their subtitles, since that would allow attacks such as hostile sites reading subtitles from confidential videos on a user's intranet.
Final step: If the user agent ever reaches this step (which can only happen if the entire resource gets loaded and kept available): abort the overall resource selection algorithm.
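By way of a non-normative illustration, a page could observe these failure modes by inspecting the error attribute from an error event handler; this sketch assumes a video element is already in the document:
<script>
 var video = document.querySelector('video');
 video.addEventListener('error', function () {
   switch (video.error.code) {
     case MediaError.MEDIA_ERR_ABORTED:
       // the user aborted the fetching process
       break;
     case MediaError.MEDIA_ERR_NETWORK:
       // a fatal network error occurred
       break;
     case MediaError.MEDIA_ERR_DECODE:
       // a fatal decoding error occurred
       break;
   }
 });
</script>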
When a media element is to forget the media element's media-resource-specific
tracks, the user agent must remove from the media element's list of text
tracks all the media-resource-specific
text tracks, then empty the media element's audioTracks
attribute's AudioTrackList
object,
then empty the media element's videoTracks
attribute's VideoTrackList
object. No events (in particular, no removetrack
events) are fired as part of this; the error
and emptied
events, fired by the algorithms that invoke this one, can be used instead.
The preload
attribute is an enumerated
attribute. The following table lists the keywords and states for the attribute — the
keywords in the left column map to the states in the cell in the second column on the same row as
the keyword. The attribute can be changed even once the media resource is being
buffered or played; the descriptions in the table below are to be interpreted with that in
mind.
Keyword | State | Brief description
---|---|---
none | None | Hints to the user agent that either the author does not expect the user to need the media resource, or that the server wants to minimise unnecessary traffic. This state does not provide a hint regarding how aggressively to actually download the media resource if buffering starts anyway (e.g. once the user hits "play").
metadata | Metadata | Hints to the user agent that the author does not expect the user to need the media resource, but that fetching the resource metadata (dimensions, track list, duration, etc), and maybe even the first few frames, is reasonable. If the user agent precisely fetches no more than the metadata, then the media element will end up with its readyState attribute set to HAVE_METADATA; typically though, some frames will be obtained as well and it will probably be HAVE_CURRENT_DATA or HAVE_FUTURE_DATA. When the media resource is playing, this keyword also hints to the user agent that bandwidth is to be considered scarce, e.g. suggesting throttling the download so that the media data is obtained at the slowest possible rate that still maintains consistent playback.
auto | Automatic | Hints to the user agent that the user agent can put the user's needs first without risk to the server, up to and including optimistically downloading the entire resource.
The empty string is also a valid keyword, and maps to the Automatic state. The attribute's missing value default is user-agent defined, though the Metadata state is suggested as a compromise between reducing server load and providing an optimal user experience.
Authors might switch the attribute from "none
" or "metadata
" to "auto
" dynamically once the user begins playback. For
example, on a page with many videos this might be used to indicate that the many videos are not to
be downloaded unless requested, but that once one is requested it is to be downloaded
aggressively.
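A non-normative sketch of that pattern, assuming each video in the page starts out with preload="none":
<script>
 var videos = document.querySelectorAll('video[preload=none]');
 Array.prototype.forEach.call(videos, function (video) {
   video.addEventListener('play', function () {
     video.preload = 'auto'; // download aggressively once the user shows interest
   });
 });
</script>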
The preload
attribute is intended to provide a hint to
the user agent about what the author thinks will lead to the best user experience. The attribute
may be ignored altogether, for example based on explicit user preferences or based on the
available connectivity.
The preload
IDL attribute must
reflect the content attribute of the same name, limited to only known
values.
The autoplay
attribute can override the
preload
attribute (since if the media plays, it naturally
has to buffer first, regardless of the hint given by the preload
attribute). Including both is not an error, however.
buffered
Returns a TimeRanges
object that represents the ranges of the media
resource that the user agent has buffered.
The buffered
attribute must return a new
static normalised TimeRanges
object that represents the ranges of the
media resource, if any, that the user agent has buffered, at the time the attribute
is evaluated. User agents must accurately determine the ranges available, even for media streams
where this can only be determined by tedious inspection.
Typically this will be a single range anchored at the zero point, but if, e.g. the user agent uses HTTP range requests in response to seeking, then there could be multiple ranges.
User agents may discard previously buffered data.
Thus, a time position included within a range of the object returned by the buffered attribute at one time can end up being not included in the range(s) of the object returned by the same attribute at later times.
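For example (non-normatively), a script could take a snapshot of the currently buffered ranges like this; the returned TimeRanges object is static, so it must be re-read to observe changes:
<script>
 var video = document.querySelector('video'); // assumes a video element is present
 var buffered = video.buffered;
 for (var i = 0; i < buffered.length; i += 1) {
   console.log('buffered: ' + buffered.start(i) + 's to ' + buffered.end(i) + 's');
 }
</script>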
duration
Returns the length of the media resource, in seconds, assuming that the start of the media resource is at time zero.
Returns NaN if the duration isn't available.
Returns Infinity for unbounded streams.
currentTime [ = value ]: Returns the official playback position, in seconds.
Can be set, to seek to the given time.
Will throw an InvalidStateError
exception if there is no selected media
resource or if there is a current media controller.
A media resource has a media timeline that maps times (in seconds) to positions in the media resource. The origin of a timeline is its earliest defined position. The duration of a timeline is its last defined position.
Establishing the media
timeline: If the media resource somehow specifies an explicit timeline whose
origin is not negative (i.e. gives each frame a specific time offset and gives the first frame a
zero or positive offset), then the media timeline should be that timeline. (Whether
the media resource can specify a timeline or not depends on the media resource's format.) If the media resource specifies an
explicit start time and date, then that time and date should be considered the zero point
in the media timeline; the timeline offset will be the time and date,
exposed using the getStartDate()
method.
If the media resource has a discontinuous timeline, the user agent must extend the timeline used at the start of the resource across the entire resource, so that the media timeline of the media resource increases linearly starting from the earliest possible position (as defined below), even if the underlying media data has out-of-order or even overlapping time codes.
For example, if two clips have been concatenated into one video file, but the video format exposes the original times for the two clips, the video data might expose a timeline that goes, say, 00:15..00:29 and then 00:05..00:38. However, the user agent would not expose those times; it would instead expose the times as 00:15..00:29 and 00:29..01:02, as a single video.
In the rare case of a media resource that does not have an explicit timeline, the zero time on the media timeline should correspond to the first frame of the media resource. In the even rarer case of a media resource with no explicit timings of any kind, not even frame durations, the user agent must itself determine the time for each frame in a user-agent-defined manner.
An example of a file format with no explicit timeline but with explicit frame
durations is the Animated GIF format. An example of a file format with no explicit timings at all
is the JPEG-push format (multipart/x-mixed-replace
with JPEG frames, often
used as the format for MJPEG streams).
If, in the case of a resource with no timing information, the user agent will nonetheless be able to seek to an earlier point than the first frame originally provided by the server, then the zero time should correspond to the earliest seekable time of the media resource; otherwise, it should correspond to the first frame received from the server (the point in the media resource at which the user agent began receiving the stream).
At the time of writing, there is no known format that lacks explicit frame time offsets yet still supports seeking to a frame before the first frame sent by the server.
Consider a stream from a TV broadcaster, which begins streaming on a sunny Friday afternoon in
October, and always sends connecting user agents the media data on the same media timeline, with
its zero time set to the start of this stream. Months later, user agents connecting to this
stream will find that the first frame they receive has a time in the millions of seconds. The getStartDate()
method would always return the date that the
broadcast started; this would allow controllers to display real times in their scrubber (e.g.
"2:30pm") rather than a time relative to when the broadcast began ("8 months, 4 hours, 12
minutes, and 23 seconds").
Consider a stream that carries a video with several concatenated fragments, broadcast by a
server that does not allow user agents to request specific times but instead just streams the
video data in a predetermined order, with the first frame delivered always being identified as
the frame with time zero. If a user agent connects to this stream and receives fragments defined
as covering timestamps 2010-03-20 23:15:00 UTC to 2010-03-21 00:05:00 UTC and 2010-02-12 14:25:00
UTC to 2010-02-12 14:35:00 UTC, it would expose this with a media timeline starting
at 0s and extending to 3,600s (one hour). Assuming the streaming server disconnected at the end
of the second clip, the duration
attribute would then
return 3,600. The getStartDate()
method would return a
Date
object with a time corresponding to 2010-03-20 23:15:00 UTC. However, if a
different user agent connected five minutes later, it would (presumably) receive
fragments covering timestamps 2010-03-20 23:20:00 UTC to 2010-03-21 00:05:00 UTC and 2010-02-12
14:25:00 UTC to 2010-02-12 14:35:00 UTC, and would expose this with a media timeline
starting at 0s and extending to 3,300s (fifty-five minutes). In this case, the getStartDate()
method would return a Date
object
with a time corresponding to 2010-03-20 23:20:00 UTC.
In both of these examples, the seekable
attribute
would give the ranges that the controller would want to actually display in its UI; typically, if
the servers don't support seeking to arbitrary times, this would be the range of time from the
moment the user agent connected to the stream up to the latest frame that the user agent has
obtained; however, if the user agent starts discarding earlier information, the actual range
might be shorter.
In any case, the user agent must ensure that the earliest possible position (as defined below) on the established media timeline is greater than or equal to zero.
The media timeline also has an associated clock. Which clock is used is user-agent defined, and may be media resource-dependent, but it should approximate the user's wall clock.
All the media elements that share a current media controller use the same clock for their media timeline.
Media elements have a current playback position, which must initially (i.e. in the absence of media data) be zero seconds. The current playback position is a time on the media timeline.
Media elements also have an official playback position, which must initially be set to zero seconds. The official playback position is an approximation of the current playback position that is kept stable while scripts are running.
Media elements also have a default playback start position, which must initially be set to zero seconds. This time is used to allow the element to be seeked even before the media is loaded.
Each media element has a show poster flag. When a media
element is created, this flag must be set to true. This flag is used to control when the
user agent is to show a poster frame for a video
element instead of showing the video
contents.
The currentTime attribute must, on getting, return the media element's default playback start position, unless that is zero, in which case it must return the element's official playback position. The returned value must be expressed in seconds. On setting, if the media element has a current media controller, then the user agent must throw an InvalidStateError exception; otherwise, if the media element's readyState is HAVE_NOTHING, then it must set the media element's default playback start position to the new value; otherwise, it must set the official playback position to the new value and then seek to the new value. The new value must be interpreted as being in seconds.
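A brief non-normative illustration of that setter behaviour:
<script>
 var video = document.querySelector('video');
 try {
   video.currentTime = 30; // seeks, or records a default playback start position
 } catch (e) {
   // InvalidStateError: the element has a current media controller,
   // so the controller would have to be seeked instead
 }
</script>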
If the media resource is a streaming resource, then the user agent might be unable to obtain certain parts of the resource after it has expired from its buffer. Similarly, some media resources might have a media timeline that doesn't start at zero. The earliest possible position is the earliest position in the stream or resource that the user agent can ever obtain again. It is also a time on the media timeline.
The earliest possible position is not explicitly exposed in the API;
it corresponds to the start time of the first range in the seekable
attribute's TimeRanges
object, if any, or
the current playback position otherwise.
When the earliest possible position changes, then: if the current playback
position is before the earliest possible position, the user agent must seek to the earliest possible position; otherwise, if
the user agent has not fired a timeupdate
event at the
element in the past 15 to 250ms and is not still running event handlers for such an event, then
the user agent must queue a task to fire a simple event named timeupdate
at the element.
Because of the above requirement and the requirement in the resource fetch algorithm that kicks in when the metadata of the clip becomes known, the current playback position can never be less than the earliest possible position.
If at any time the user agent learns that an audio or video track has ended and all media data relating to that track corresponds to parts of the media timeline that are before the earliest possible position, the user agent may queue a task to first remove the track from the audioTracks attribute's AudioTrackList object or the videoTracks attribute's VideoTrackList object as appropriate and then fire a trusted event with the name removetrack, that does not bubble and is not cancelable, and that uses the TrackEvent interface, with the track attribute initialised to the AudioTrack or VideoTrack object representing the track, at the media element's aforementioned AudioTrackList or VideoTrackList object.
The duration
attribute must return the time
of the end of the media resource, in seconds, on the media timeline. If
no media data is available, then the attributes must return the Not-a-Number (NaN)
value. If the media resource is not known to be bounded (e.g. streaming radio, or a
live event with no announced end time), then the attribute must return the positive Infinity
value.
The user agent must determine the duration of the media resource before playing
any part of the media data and before setting readyState
to a value equal to or greater than HAVE_METADATA
, even if doing so requires fetching multiple
parts of the resource.
When the length of the media resource changes to a known value
(e.g. from being unknown to known, or from a previously established length to a new length) the
user agent must queue a task to fire a simple event named durationchange
at the media element. (The
event is not fired when the duration is reset as part of loading a new media resource.) If the
duration is changed such that the current playback position ends up being greater
than the time of the end of the media resource, then the user agent must also seek to the time of the end of the media resource.
If an "infinite" stream ends for some reason, then the duration would change
from positive Infinity to the time of the last frame or sample in the stream, and the durationchange
event would be fired. Similarly, if the
user agent initially estimated the media resource's duration instead of determining
it precisely, and later revises the estimate based on new information, then the duration would
change and the durationchange
event would be
fired.
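For example, a script can watch for these duration revisions; this non-normative sketch assumes a video element is present:
<script>
 var video = document.querySelector('video');
 video.addEventListener('durationchange', function () {
   if (isNaN(video.duration)) {
     // no media data is available yet
   } else if (video.duration == Infinity) {
     // the resource is not known to be bounded
   } else {
     console.log('duration: ' + video.duration + 's');
   }
 });
</script>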
Some video files also have an explicit date and time corresponding to the zero time in the media timeline, known as the timeline offset. Initially, the timeline offset must be set to Not-a-Number (NaN).
The getStartDate()
method must return a new Date
object representing the current
timeline offset.
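As a non-normative sketch, a controller could use the timeline offset to display wall-clock times in its UI, falling back when the resource gives no explicit date (in which case the returned Date object's time is Not-a-Number):
<script>
 var video = document.querySelector('video');
 function wallClockTime() {
   var start = video.getStartDate();
   if (isNaN(start.getTime()))
     return null; // no timeline offset: fall back to plain offsets
   return new Date(start.getTime() + video.currentTime * 1000);
 }
</script>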
The loop
attribute is a boolean
attribute that, if specified, indicates that the media element is to seek back
to the start of the media resource upon reaching the end.
The loop
attribute has no effect while the element has a
current media controller.
The loop
IDL attribute must reflect
the content attribute of the same name.
readyState
Returns a value that expresses the current state of the element with respect to rendering the current playback position, from the codes in the list below.
Media elements have a ready state, which describes to what degree they are ready to be rendered at the current playback position. The possible values are as follows; the ready state of a media element at any particular time is the greatest value describing the state of the element:
HAVE_NOTHING (numeric value 0): No information regarding the media resource is available. No data for the current playback position is available. Media elements whose networkState attribute is set to NETWORK_EMPTY are always in the HAVE_NOTHING state.
HAVE_METADATA (numeric value 1): Enough of the resource has been obtained that the duration of the resource is available. In the case of a video element, the dimensions of the video are also available. The API will no longer throw an exception when seeking. No media data is available for the immediate current playback position.
HAVE_CURRENT_DATA (numeric value 2): Data for the immediate current playback position is available, but either not enough data is available that the user agent could successfully advance the current playback position in the direction of playback at all without immediately reverting to the HAVE_METADATA state, or there is no more data to obtain in the direction of playback. For example, in video this corresponds to the user agent having data from the current frame, but not the next frame, when the current playback position is at the end of the current frame; and to when playback has ended.
HAVE_FUTURE_DATA (numeric value 3): Data for the immediate current playback position is available, as well as enough data for the user agent to advance the current playback position in the direction of playback at least a little without immediately reverting to the HAVE_METADATA state, and the text tracks are ready. For example, in video this corresponds to the user agent having data for at least the current frame and the next frame when the current playback position is at the instant in time between the two frames, or to the user agent having the video data for the current frame and audio data to keep playing at least a little when the current playback position is in the middle of a frame. The user agent cannot be in this state if playback has ended, as the current playback position can never advance in this case.
HAVE_ENOUGH_DATA (numeric value 4): All the conditions described for the HAVE_FUTURE_DATA state are met, and, in addition, either of the following conditions is also true:
- The user agent estimates that data is being fetched at a rate where the current playback position, if it were to advance at the element's effective playback rate, would not overtake the available data before playback reaches the end of the media resource.
- The user agent has entered a state where waiting longer will not result in further data being obtained, and therefore nothing would be gained by delaying playback any further.
In practice, the difference between HAVE_METADATA
and HAVE_CURRENT_DATA
is negligible. Really the only time
the difference is relevant is when painting a video
element onto a
canvas
, where it distinguishes the case where something will be drawn (HAVE_CURRENT_DATA
or greater) from the case where
nothing is drawn (HAVE_METADATA
or less). Similarly,
the difference between HAVE_CURRENT_DATA
(only
the current frame) and HAVE_FUTURE_DATA
(at least
this frame and the next) can be negligible (in the extreme, only one frame). The only time that
distinction really matters is when a page provides an interface for "frame-by-frame"
navigation.
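The canvas case described in this note could be handled with a sketch like the following, assuming a video and a canvas element already exist in the document:
<script>
 var video = document.querySelector('video');
 var canvas = document.querySelector('canvas');
 function paintFrame() {
   // something will only be drawn at HAVE_CURRENT_DATA or greater
   if (video.readyState >= HTMLMediaElement.HAVE_CURRENT_DATA)
     canvas.getContext('2d').drawImage(video, 0, 0, canvas.width, canvas.height);
 }
</script>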
When the ready state of a media element whose networkState
is not NETWORK_EMPTY
changes, the user agent must follow the steps
given below:
Apply the first applicable set of substeps from the following list:
If the previous ready state was HAVE_NOTHING, and the new ready state is HAVE_METADATA: Queue a task to fire a simple event named loadedmetadata at the element.
Before this task is run, as part of the event loop mechanism, the rendering will have been updated to resize the video element if appropriate.
If the previous ready state was HAVE_METADATA and the new ready state is HAVE_CURRENT_DATA or greater: If this is the first time this occurs for this media element since the load() algorithm was last invoked, the user agent must queue a task to fire a simple event named loadeddata at the element.
If the new ready state is HAVE_FUTURE_DATA or HAVE_ENOUGH_DATA, then the relevant steps below must then be run also.
If the previous ready state was HAVE_FUTURE_DATA or more, and the new ready state is HAVE_CURRENT_DATA or less: If the media element was potentially playing before its readyState attribute changed to a value lower than HAVE_FUTURE_DATA, and the element has not ended playback, and playback has not stopped due to errors, paused for user interaction, or paused for in-band content, the user agent must queue a task to fire a simple event named timeupdate at the element, and queue a task to fire a simple event named waiting at the element.
If the previous ready state was HAVE_CURRENT_DATA or less, and the new ready state is HAVE_FUTURE_DATA: The user agent must queue a task to fire a simple event named canplay at the element.
If the element's paused attribute is false, the user agent must queue a task to fire a simple event named playing at the element.
If the new ready state is HAVE_ENOUGH_DATA:
If the previous ready state was HAVE_CURRENT_DATA
or less, the user agent must
queue a task to fire a simple event named canplay
at the element, and, if the element's paused
attribute is false, queue a task to
fire a simple event named playing
at the element.
If the autoplaying flag is true, and the paused
attribute is true, and the media element
has an autoplay
attribute specified, and the
media element's node document's active sandboxing flag set
does not have the sandboxed automatic features browsing context flag set, then
the user agent may also run the following substeps:
Set the paused attribute to false.
Queue a task to fire a simple event named play at the element.
Queue a task to fire a simple event named playing at the element.
User agents do not need to support autoplay, and it is suggested that user
agents honor user preferences on the matter. Authors are urged to use the autoplay
attribute rather than using script to force the
video to play, so as to allow the user to override the behaviour if so desired.
In any case, the user agent must finally queue a task to fire a simple
event named canplaythrough
at the element.
If the media element has a current media controller, then report the controller state for the media element's current media controller.
It is possible for the ready state of a media element to jump between these states
discontinuously. For example, the state of a media element can jump straight from HAVE_METADATA
to HAVE_ENOUGH_DATA
without passing through the HAVE_CURRENT_DATA
and HAVE_FUTURE_DATA
states.
The readyState
IDL attribute must, on
getting, return the value described above that describes the current ready state of the
media element.
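Because states can be skipped, scripts should react to the events above rather than expect each transition in turn. For instance, a non-normative way to start playback only once play-through is considered likely:
<script>
 var video = document.querySelector('video');
 video.oncanplaythrough = function () {
   video.oncanplaythrough = null; // react to the first occurrence only
   video.play();
 };
</script>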
The autoplay
attribute is a boolean
attribute. When present, the user agent (as described in the algorithm
described herein) will automatically begin playback of the media resource as
soon as it can do so without stopping.
Authors are urged to use the autoplay
attribute rather than using script to trigger automatic playback, as this allows the user to
override the automatic playback when it is not desired, e.g. when using a screen reader. Authors
are also encouraged to consider not using the automatic playback behaviour at all, and instead to
let the user agent wait for the user to start playback explicitly.
The autoplay
IDL attribute must
reflect the content attribute of the same name.
paused
Returns true if playback is paused; false otherwise.
ended
Returns true if playback has reached the end of the media resource.
defaultPlaybackRate [ = value ]: Returns the default rate of playback, for when the user is not fast-forwarding or reversing through the media resource.
Can be set, to change the default rate of playback.
The default rate has no direct effect on playback, but if the user switches to a fast-forward mode, when they return to the normal playback mode, it is expected that the rate of playback will be returned to the default rate of playback.
When the element has a current media controller, the defaultPlaybackRate
attribute is ignored and the
current media controller's defaultPlaybackRate
is used instead.
playbackRate [ = value ]: Returns the current rate of playback, where 1.0 is normal speed.
Can be set, to change the rate of playback.
When the element has a current media controller, the playbackRate
attribute is ignored and the current
media controller's playbackRate
is
used instead.
played
Returns a TimeRanges
object that represents the ranges of the media
resource that the user agent has played.
play(): Sets the paused attribute to false, loading the media resource and beginning playback if necessary. If the playback had ended, will restart it from the start.
pause(): Sets the paused attribute to true, loading the media resource if necessary.
The paused
attribute represents whether the
media element is paused or not. The attribute must initially be true.
A media element is a blocked media element if its readyState
attribute is in the HAVE_NOTHING
state, the HAVE_METADATA
state, or the HAVE_CURRENT_DATA
state, or if the element has
paused for user interaction or paused for in-band content.
A media element is said to be potentially playing when its paused
attribute is false, the element has not ended
playback, playback has not stopped due to errors, the element either has no
current media controller or has a current media controller but is not
blocked on its media controller, and the element is not a blocked media
element.
A waiting
DOM event can be fired as a result of an element that is
potentially playing stopping playback due to its readyState
attribute changing to a value lower than HAVE_FUTURE_DATA
.
A media element is said to have ended playback when:
- the element's readyState attribute is HAVE_METADATA or greater, and
- either:
  - the current playback position is the end of the media resource, and
  - the direction of playback is forwards, and
  - the media element does not have a loop attribute specified, or the media element has a current media controller;
- or:
  - the current playback position is the earliest possible position, and
  - the direction of playback is backwards.
The ended
attribute must return true if, the
last time the event loop reached step 1, the media element had
ended playback and the direction of playback was forwards, and false
otherwise.
A media element is said to have stopped due to errors when the
element's readyState
attribute is HAVE_METADATA
or greater, and the user agent encounters a non-fatal error during the processing of the
media data, and due to that error, is not able to play the content at the
current playback position.
A media element is said to have paused for user interaction when its
paused
attribute is false, the readyState
attribute is either HAVE_FUTURE_DATA
or HAVE_ENOUGH_DATA
and the user agent has reached a point
in the media resource where the user has to make a selection for the resource to
continue. If the media element has a current media controller when this
happens, then the user agent must report the controller state for the media
element's current media controller. If the media element has a
current media controller when the user makes a selection, allowing playback to
resume, the user agent must similarly report the controller state for the media
element's current media controller.
It is possible for a media element to have both ended playback and paused for user interaction at the same time.
When a media element that is potentially playing stops playing
because it has paused for user interaction, the user agent must queue a
task to fire a simple event named timeupdate
at the element.
A media element is said to have paused for in-band content when its
paused
attribute is false, the readyState
attribute is either HAVE_FUTURE_DATA
or HAVE_ENOUGH_DATA
and the user agent has suspended
playback of the media resource in order to play content that is temporally anchored
to the media resource and has a non-zero length, or to play content that is
temporally anchored to a segment of the media resource but has a length longer than
that segment. If the media element has a current media controller when
this happens, then the user agent must report the controller state for the
media element's current media controller. If the media
element has a current media controller when the user agent unsuspends
playback, the user agent must similarly report the controller state for the
media element's current media controller.
One example of when a media element would be paused for in-band content is when the user agent is playing audio descriptions from an external WebVTT file, and the synthesized speech generated for a cue is longer than the time between the text track cue start time and the text track cue end time.
When the current playback position reaches the end of the media resource when the direction of playback is forwards, then the user agent must follow these steps:
If the media element has a loop
attribute specified and does not have a current media controller, then seek to the earliest possible position of the
media resource and abort these steps.
As defined above, the ended
IDL attribute starts
returning true once the event loop returns to step 1.
Queue a task to fire a simple event named timeupdate
at the media element.
Queue a task that, if the media element does not have a
current media controller, and the media element has still ended
playback, and the direction of playback is still forwards, and paused is false, changes paused to true and fires a
simple event named pause
at the media
element.
Queue a task to fire a simple event named ended
at the media element.
If the media element has a current media controller, then report the controller state for the media element's current media controller.
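A non-normative sketch that restarts playback each time it ends, e.g. to count completed passes instead of using the loop attribute:
<script>
 var video = document.querySelector('video');
 var completed = 0;
 video.addEventListener('ended', function () {
   completed += 1;
   video.play(); // after ended playback, play() seeks back to the earliest possible position
 });
</script>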
When the current playback position reaches the earliest possible
position of the media resource when the direction of playback is
backwards, then the user agent must only queue a task to fire a simple
event named timeupdate
at the element.
The word "reaches" here does not imply that the current playback position needs to have changed during normal playback; it could be via seeking, for instance.
The defaultPlaybackRate
attribute
gives the desired speed at which the media resource is to play, as a multiple of its
intrinsic speed. The attribute is mutable: on getting it must return the last value it was set to,
or 1.0 if it hasn't yet been set; on setting the attribute must be set to the new value.
The defaultPlaybackRate
is used
by the user agent when it exposes a user
interface to the user.
The playbackRate
attribute gives the
effective playback rate (assuming there is no current media controller
overriding it), which is the speed at which the media resource plays, as a multiple
of its intrinsic speed. If it is not equal to the defaultPlaybackRate
, then the implication is that the
user is using a feature such as fast forward or slow motion playback. The attribute is mutable: on
getting it must return the last value it was set to, or 1.0 if it hasn't yet been set; on setting
the attribute must be set to the new value, and the playback will change speed (if the element is
potentially playing and there is no current media controller).
When the defaultPlaybackRate
or playbackRate
attributes change value (either by
being set by script or by being changed directly by the user agent, e.g. in response to user
control) the user agent must queue a task to fire a simple event named
ratechange
at the media element.
The defaultPlaybackRate
and
playbackRate
attributes have no effect when the
media element has a current media controller; the namesake attributes on
the MediaController
object are used instead in that situation.
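For example (non-normatively, and assuming no current media controller), a script can change the rate and observe the resulting ratechange events:
<script>
 var video = document.querySelector('video');
 video.addEventListener('ratechange', function () {
   console.log('playback rate is now ' + video.playbackRate);
 });
 video.playbackRate = 2.0;  // fast forward at twice the intrinsic speed
 video.playbackRate = -1.0; // backwards; the audio is muted while reversed
</script>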
The played
attribute must return a new static
normalised TimeRanges
object that represents the ranges of points on the
media timeline of the media resource reached through the usual monotonic
increase of the current playback position during normal playback, if any, at the time
the attribute is evaluated.
When the play()
method on a media
element is invoked, the user agent must run the following steps.
If the media element's networkState
attribute has the value NETWORK_EMPTY
, invoke the media element's
resource selection algorithm.
If the playback has ended and the direction of playback is forwards, and the media element does not have a current media controller, seek to the earliest possible position of the media resource.
This will cause the user agent to queue a
task to fire a simple event named timeupdate
at the media element.
If the media element has a current media controller, then bring the media element up to speed with its new media controller.
If the media element's paused
attribute is
true, run the following substeps:
Change the value of paused
to false.
If the show poster flag is true, set the element's show poster flag to false and run the time marches on steps.
Queue a task to fire a simple event named play
at the element.
If the media element's readyState
attribute has the value HAVE_NOTHING
, HAVE_METADATA
, or HAVE_CURRENT_DATA
, queue a task to
fire a simple event named waiting
at the
element.
Otherwise, the media element's readyState
attribute has the value HAVE_FUTURE_DATA
or HAVE_ENOUGH_DATA
: queue a task to
fire a simple event named playing
at the
element.
Set the media element's autoplaying flag to false.
If the media element has a current media controller, then report the controller state for the media element's current media controller.
When the pause()
method is invoked, and when
the user agent is required to pause the media element, the user agent must run the
following steps:
If the media element's networkState
attribute has the value NETWORK_EMPTY
, invoke the media element's
resource selection algorithm.
Run the internal pause steps for the media element.
The internal pause steps for a media element are as follows:
Set the media element's autoplaying flag to false.
If the media element's paused
attribute
is false, run the following steps:
Change the value of paused
to true.
Queue a task to fire a simple
event named timeupdate
at the
element.
Queue a task to fire a simple
event named pause
at the element.
Set the official playback position to the current playback position.
If the media element has a current media controller, then report the controller state for the media element's current media controller.
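A minimal, non-normative play/pause toggle built on the paused attribute:
<script>
 var video = document.querySelector('video');
 function togglePlayback() {
   if (video.paused) {
     video.play(); // also triggers resource selection if networkState is NETWORK_EMPTY
   } else {
     video.pause();
   }
 }
</script>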
The effective playback rate is not necessarily the element's playbackRate
. When a media element has a
current media controller, its effective playback rate is the
MediaController
's media controller playback rate. Otherwise, the
effective playback rate is just the element's playbackRate
. Thus, the current media
controller overrides the media element.
If the effective playback rate is positive or zero, then the direction of playback is forwards. Otherwise, it is backwards.
When a media element is potentially playing and
its Document
is a fully active Document
, its current
playback position must increase monotonically at effective playback rate units
of media time per unit time of the media timeline's clock. (This specification always
refers to this as an increase, but that increase could actually be a decrease if
the effective playback rate is negative.)
The effective playback rate can be 0.0, in which case the
current playback position doesn't move, despite playback not being paused (paused
doesn't become true, and the pause
event doesn't fire).
This specification doesn't define how the user agent achieves the appropriate playback rate — depending on the protocol and media available, it is plausible that the user agent could negotiate with the server to have the server provide the media data at the appropriate rate, so that (except for the period between when the rate is changed and when the server updates the stream's playback rate) the client doesn't actually have to drop or interpolate any frames.
Any time the user agent provides a stable state, the official playback position must be set to the current playback position.
While the direction of playback is backwards, any corresponding audio must be muted. While the effective playback rate is so low or so high that the user agent cannot play audio usefully, the corresponding audio must also be muted. If the effective playback rate is not 1.0, the user agent may apply pitch adjustments to the audio as necessary to render it faithfully.
Media elements that are potentially playing
while not in a Document
must not play any video, but should play any
audio component. Media elements must not stop playing just because all references to them have
been removed; only once a media element is in a state where no further audio could ever be played
by that element may the element be garbage collected.
It is possible for an element to which no explicit references exist to play audio,
even if such an element is not still actively playing: for instance, it could have a current
media controller that still has references and can still be unpaused, or it could be
unpaused but stalled waiting for content to buffer, or it could be still buffering, but with a
suspend
event listener that begins playback. Even a
media element whose media resource has no audio tracks could eventually play audio
again if it had an event listener that changes the media resource.
Each media element has a list of newly introduced cues, which must be initially empty. Whenever a text track cue is added to the list of cues of a text track that is in the list of text tracks for a media element, that cue must be added to the media element's list of newly introduced cues. Whenever a text track is added to the list of text tracks for a media element, all of the cues in that text track's list of cues must be added to the media element's list of newly introduced cues. When a media element's list of newly introduced cues has new cues added while the media element's show poster flag is not set, then the user agent must run the time marches on steps.
When a text track cue is removed from the list of cues of a text track that is in the list of text tracks for a media element, and whenever a text track is removed from the list of text tracks of a media element, if the media element's show poster flag is not set, then the user agent must run the time marches on steps.
When the current playback position of a media element changes (e.g. due to playback or seeking), the user agent must run the time marches on steps. If the current playback position changes while the steps are running, then the user agent must wait for the steps to complete, and then must immediately rerun the steps. (These steps are thus run as often as possible or needed — if one iteration takes a long time, this can cause certain cues to be skipped over as the user agent rushes ahead to "catch up".)
The time marches on steps are as follows:
Let current cues be a list of cues, initialised to contain all the cues of all the hidden or showing text tracks of the media element (not the disabled ones) whose start times are less than or equal to the current playback position and whose end times are greater than the current playback position.
Let other cues be a list of cues, initialised to contain all the cues of the hidden and showing text tracks of the media element that are not present in current cues.
Let last time be the current playback position at the time this algorithm was last run for this media element, if this is not the first time it has run.
If the current playback position has, since the last time this algorithm was run, only changed through its usual monotonic increase during normal playback, then let missed cues be the list of cues in other cues whose start times are greater than or equal to last time and whose end times are less than or equal to the current playback position. Otherwise, let missed cues be an empty list.
Remove all the cues in missed cues that are also in the media element's list of newly introduced cues, and then empty the element's list of newly introduced cues.
If the time was reached through the usual monotonic increase of the current playback
position during normal playback, and if the user agent has not fired a timeupdate
event at the element in the past 15 to 250ms and
is not still running event handlers for such an event, then the user agent must queue a
task to fire a simple event named timeupdate
at the element. (In the other cases, such as
explicit seeks, relevant events get fired as part of the overall process of changing the
current playback position.)
The event thus is not to be fired faster than about 66Hz or slower than 4Hz (assuming the event handlers don't take longer than 250ms to run). User agents are encouraged to vary the frequency of the event based on the system load and the average cost of processing the event each time, so that the UI updates are not any more frequent than the user agent can comfortably handle while decoding the video.
If all of the cues in current cues have their text track cue active flag set, none of the cues in other cues have their text track cue active flag set, and missed cues is empty, then abort these steps.
If the time was reached through the usual monotonic increase of the current playback position during normal playback, and there are cues in other cues that have their text track cue pause-on-exit flag set and that either have their text track cue active flag set or are also in missed cues, then immediately pause the media element.
In the other cases, such as explicit seeks, playback is not paused by going past the end time of a cue, even if that cue has its text track cue pause-on-exit flag set.
Let events be a list of tasks, initially empty. Each task in this list will be associated with a text track, a text track cue, and a time, which are used to sort the list before the tasks are queued.
Let affected tracks be a list of text tracks, initially empty.
When the steps below say to prepare an event named event for a text track cue target with a time time, the user agent must run these substeps:
Let track be the text track with which the text track cue target is associated.
Create a task to fire a simple event named event at target.
Add the newly created task to events, associated with the time time, the text track track, and the text track cue target.
Add track to affected tracks.
For each text track cue in missed
cues, prepare an event named enter
for the
TextTrackCue
object with the text track cue start time.
For each text track cue in other
cues that either has its text track cue active flag set or is in missed cues, prepare an event named exit
for the TextTrackCue
object with the later of the
text track cue end time and the text track cue start time.
For each text track cue in current
cues that does not have its text track cue active flag set, prepare an
event named enter
for the TextTrackCue
object with the text track cue start time.
Sort the tasks in events in ascending time order (tasks with earlier times first).
Further sort tasks in events that have the same time by the relative text track cue order of the text track cues associated with these tasks.
Finally, sort tasks in events that have
the same time and same text track cue order by placing tasks that fire enter
events before
those that fire exit
events.
Sort affected tracks in the same order as the text tracks appear in the media element's list of text tracks, and remove duplicates.
For each text track in affected tracks, in the list
order, queue a task to fire a simple event named cuechange
at the TextTrack
object, and, if the
text track has a corresponding track
element, to then fire a
simple event named cuechange
at the
track
element as well.
Set the text track cue active flag of all the cues in the current cues, and unset the text track cue active flag of all the cues in the other cues.
Run the rules for updating the text track rendering of each of the text tracks in affected tracks that are showing. For example, for text tracks based on WebVTT, the rules for updating the display of WebVTT text tracks. [[!WEBVTT]]
For the purposes of the algorithm above, a text track cue is considered to be part of a text track only if it is listed in the text track list of cues, not merely if it is associated with the text track.
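For example, a script could react to the cuechange events queued by these steps; this non-normative sketch assumes the media element has at least one text track:
<script>
 var video = document.querySelector('video');
 var track = video.textTracks[0];
 track.oncuechange = function () {
   var active = track.activeCues; // the cues whose active flag is currently set
   if (active.length > 0)
     console.log('first active cue starts at ' + active[0].startTime + 's');
 };
</script>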
If the media element's node document stops being a fully active document, then the playback will stop until the document is active again.
When a media element is removed
from a Document
, the user agent must run the following steps:
Await a stable state, allowing the task that removed the media element from the
Document
to continue. The synchronous section consists of all the
remaining steps of this algorithm. (Steps in the synchronous section are marked with
⌛.)
⌛ If the media element is in a Document
,
abort these steps.
⌛ Run the internal pause steps for the media element.
seeking
Returns true if the user agent is currently seeking.
seekable
Returns a TimeRanges
object that represents the ranges of the media
resource to which it is possible for the user agent to seek.
fastSeek(time): Seeks to near the given time as fast as possible, trading precision for speed. (To seek to a precise time, use the currentTime attribute.) This does nothing if the media resource has not been loaded.
The seeking
attribute must initially have the
value false.
The fastSeek()
method must seek to the time given by the method's argument, with the
approximate-for-speed flag set.
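A typical, non-normative division of labour between the two kinds of seek while the user drags a scrubber; onScrubberDrag and onScrubberRelease here are hypothetical UI hooks:
<script>
 var video = document.querySelector('video');
 function onScrubberDrag(time) {
   video.fastSeek(time); // approximate: may snap to a nearby key frame
 }
 function onScrubberRelease(time) {
   video.currentTime = time; // precise seek
 }
</script>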
When the user agent is required to seek to a particular new playback position in the media resource, optionally with the approximate-for-speed flag set, it means that the user agent must run the following steps. This algorithm interacts closely with the event loop mechanism; in particular, it has a synchronous section (which is triggered as part of the event loop algorithm). Steps in that section are marked with ⌛.
Set the media element's show poster flag to false.
If the media element's readyState
is HAVE_NOTHING
, abort these steps.
If the element's seeking
IDL attribute is true,
then another instance of this algorithm is already running. Abort that other instance of the
algorithm without waiting for the step that it is running to complete.
Set the seeking
IDL attribute to true.
If the seek was in response to a DOM method call or setting of an IDL attribute, then continue the script. The remainder of these steps must be run in parallel. With the exception of the steps marked with ⌛, they could be aborted at any time by another instance of this algorithm being invoked.
If the new playback position is later than the end of the media resource, then let it be the end of the media resource instead.
If the new playback position is less than the earliest possible position, let it be that position instead.
If the (possibly now changed) new playback position is not in one of
the ranges given in the seekable
attribute, then let it
be the position in one of the ranges given in the seekable
attribute that is the nearest to the new
playback position. If two positions both satisfy that constraint (i.e. the new playback position is exactly in the middle between two ranges in the seekable
attribute) then use the position that is closest to
the current playback position. If there are no ranges given in the seekable
attribute then set the seeking
IDL attribute to false and abort these steps.
If the approximate-for-speed flag is set, adjust the new playback position to a value that will allow for playback to resume promptly. If the new playback position before this step is before the current playback position, then the adjusted new playback position must also be before the current playback position. Similarly, if the new playback position before this step is after the current playback position, then the adjusted new playback position must also be after the current playback position.
For example, the user agent could snap to a nearby key frame, so that it doesn't have to spend time decoding then discarding intermediate frames before resuming playback.
Queue a task to fire a simple event named seeking
at the element.
Set the current playback position to the new playback position.
If the media element was potentially playing
immediately before it started seeking, but seeking caused its readyState
attribute to change to a value lower than HAVE_FUTURE_DATA
, then a waiting
event will be
fired at the element.
This step sets the current playback position, and thus can immediately trigger other conditions, such as the rules regarding when playback "reaches the end of the media resource" (part of the logic that handles looping), even before the user agent is actually able to render the media data for that position (as determined in the next step).
The currentTime
attribute returns
the official playback position, not the current playback position, and
therefore gets updated before script execution, separate from this algorithm.
Wait until the user agent has established whether or not the media data for the new playback position is available, and, if it is, until it has decoded enough data to play back that position.
Await a stable state. The synchronous section consists of all the remaining steps of this algorithm. (Steps in the synchronous section are marked with ⌛.)
⌛ Set the seeking
IDL attribute to
false.
⌛ Run the time marches on steps.
⌛ Queue a task to fire a simple event
named timeupdate
at the element.
⌛ Queue a task to fire a simple event named seeked
at the element.
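Non-normatively, a page can mirror this algorithm's progress in its UI through the seeking and seeked events; showSpinner and hideSpinner here are hypothetical helpers:
<script>
 var video = document.querySelector('video');
 video.addEventListener('seeking', function () {
   showSpinner(); // the seeking IDL attribute is now true
 });
 video.addEventListener('seeked', function () {
   hideSpinner(); // the seek has completed
 });
</script>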
The seekable
attribute must return a new
static normalised TimeRanges
object that represents the ranges of the
media resource, if any, that the user agent is able to seek to, at the time the
attribute is evaluated.
If the user agent can seek to anywhere in the media resource, e.g.
because it is a simple movie file and the user agent and the server support HTTP Range requests,
then the attribute would return an object with one range, whose start is the time of the first
frame (the earliest possible position, typically zero), and whose end is the same as
the time of the first frame plus the duration
attribute's
value (which would equal the time of the last frame, and might be positive Infinity).
The range might be continuously changing, e.g. if the user agent is buffering a sliding window on an infinite stream. This is the behaviour seen with DVRs viewing live TV, for instance.
User agents should adopt a very liberal and optimistic view of what is seekable. User agents should also buffer recent content where possible to enable seeking to be fast.
For instance, consider a large video file served on an HTTP server without support for HTTP Range requests. A browser could implement this by only buffering the current frame and data obtained for subsequent frames, never allowing seeking, except for seeking to the very start by restarting the playback. However, this would be a poor implementation. A high quality implementation would buffer the last few minutes of content (or more, if sufficient storage space is available), allowing the user to jump back and rewatch something surprising without any latency, and would in addition allow arbitrary seeking by reloading the file from the start if necessary, which would be slower but still more convenient than having to literally restart the video and watch it all the way through just to get to an earlier unbuffered spot.
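As a rough illustration, a controller could clamp a requested position to the seekable ranges before seeking. (The seek algorithm above clamps in any case; doing so in the UI merely avoids surprising jumps. This sketch ignores gaps between ranges.)
<script>
 var video = document.querySelector('video');
 function clampToSeekable(time) {
   var ranges = video.seekable;
   if (ranges.length == 0)
     return video.currentTime; // nowhere to seek
   if (time < ranges.start(0))
     return ranges.start(0);
   if (time > ranges.end(ranges.length - 1))
     return ranges.end(ranges.length - 1);
   return time;
 }
</script>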
Media resources might be internally scripted or interactive. Thus, a media element could play in a non-linear fashion. If this happens, the user agent must act as if the algorithm for seeking was used whenever the current playback position changes in a discontinuous fashion (so that the relevant events fire). If the media element has a current media controller, then the user agent must seek the media controller appropriately instead.
A media resource can have multiple embedded audio and video tracks. For example, in addition to the primary video and audio tracks, a media resource could have foreign-language dubbed dialogues, director's commentaries, audio descriptions, alternative angles, or sign-language overlays.
audioTracks
Returns an AudioTrackList
object representing the audio tracks available in the
media resource.
videoTracks
Returns a VideoTrackList
object representing the video tracks available in the
media resource.
The audioTracks
attribute of a
media element must return a live AudioTrackList
object
representing the audio tracks available in the media element's media
resource.
The videoTracks
attribute of a
media element must return a live VideoTrackList
object
representing the video tracks available in the media element's media
resource.
There are only ever one AudioTrackList
object and one
VideoTrackList
object per media element, even if another media
resource is loaded into the element: the objects are reused. (The AudioTrack
and VideoTrack
objects are not, though.)
In this example, a script defines a function that takes a URL to a video and a reference to an element where the video is to be placed. That function then tries to load the video, and, once it is loaded, checks to see if there is a sign-language track available. If there is, it also displays that track. Both tracks are just placed in the given container; it's assumed that styles have been applied to make this work in a pretty way!
<script>
 function loadVideo(url, container) {
   var controller = new MediaController();
   var video = document.createElement('video');
   video.src = url;
   video.autoplay = true;
   video.controls = true;
   video.controller = controller;
   container.appendChild(video);
   video.onloadedmetadata = function (event) {
     for (var i = 0; i < video.videoTracks.length; i += 1) {
       if (video.videoTracks[i].kind == 'sign') {
         var sign = document.createElement('video');
         sign.src = url + '#track=' + video.videoTracks[i].id;
         sign.autoplay = true;
         sign.controller = controller;
         container.appendChild(sign);
         return;
       }
     }
   };
 }
</script>
AudioTrackList and VideoTrackList objects
The AudioTrackList and VideoTrackList interfaces are used by attributes defined in the previous section.
interface AudioTrackList : EventTarget {
  readonly attribute unsigned long length;
  getter AudioTrack (unsigned long index);
  AudioTrack? getTrackById(DOMString id);

  attribute EventHandler onchange;
  attribute EventHandler onaddtrack;
  attribute EventHandler onremovetrack;
};

interface AudioTrack {
  readonly attribute DOMString id;
  readonly attribute DOMString kind;
  readonly attribute DOMString label;
  readonly attribute DOMString language;
  attribute boolean enabled;
};

interface VideoTrackList : EventTarget {
  readonly attribute unsigned long length;
  getter VideoTrack (unsigned long index);
  VideoTrack? getTrackById(DOMString id);
  readonly attribute long selectedIndex;

  attribute EventHandler onchange;
  attribute EventHandler onaddtrack;
  attribute EventHandler onremovetrack;
};

interface VideoTrack {
  readonly attribute DOMString id;
  readonly attribute DOMString kind;
  readonly attribute DOMString label;
  readonly attribute DOMString language;
  attribute boolean selected;
};
audioTracks.length
videoTracks.length
Returns the number of tracks in the list.
audioTracks[index]
videoTracks[index]
Returns the specified AudioTrack or VideoTrack object.
audioTracks.getTrackById(id)
videoTracks.getTrackById(id)
Returns the AudioTrack or VideoTrack object with the given identifier, or null if no track has that identifier.
id
Returns the ID of the given track. This is the ID that can be used with a fragment identifier
if the format supports the Media Fragments URI syntax, and that can be used with
the getTrackById()
method. [[!MEDIAFRAG]]
kind
Returns the category the given track falls into. The possible track categories are given below.
label
Returns the label of the given track, if known, or the empty string otherwise.
language
Returns the language of the given track, if known, or the empty string otherwise.
enabled [ = value ]
Returns true if the given track is active, and false otherwise.
Can be set, to change whether the track is enabled or not. If multiple audio tracks are enabled simultaneously, they are mixed.
videoTracks.selectedIndex
Returns the index of the currently selected track, if any, or −1 otherwise.
selected [ = value ]
Returns true if the given track is active, and false otherwise.
Can be set, to change whether the track is selected or not. Either zero or one video track is selected; selecting a new track while a previous one is selected will unselect the previous one.
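For example, a script could switch the enabled audio track and the selected video track like this (a minimal sketch; the 'commentary' and 'sign' kinds are taken from the category table below):

function enableCommentary(video) {
  // Enabling one audio track does not disable the others (they would be
  // mixed), so disable them explicitly.
  for (var i = 0; i < video.audioTracks.length; i += 1)
    video.audioTracks[i].enabled = (video.audioTracks[i].kind == 'commentary');
}

function showSignLanguage(video) {
  for (var i = 0; i < video.videoTracks.length; i += 1) {
    if (video.videoTracks[i].kind == 'sign') {
      video.videoTracks[i].selected = true; // unselects any previously selected track
      return;
    }
  }
}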
An AudioTrackList
object represents a dynamic list of zero or more audio tracks,
of which zero or more can be enabled at a time. Each audio track is represented by an
AudioTrack
object.
A VideoTrackList
object represents a dynamic list of zero or more video tracks, of
which zero or one can be selected at a time. Each video track is represented by a
VideoTrack
object.
Tracks in AudioTrackList
and VideoTrackList
objects must be
consistently ordered. If the media resource is in a format that defines an order,
then that order must be used; otherwise, the order must be the relative order in which the tracks
are declared in the media resource. The order used is called the natural order
of the list.
Each track in one of these objects thus has an index; the first has the index 0, and each subsequent track is numbered one higher than the previous one. If a media resource dynamically adds or removes audio or video tracks, then the indices of the tracks will change dynamically. If the media resource changes entirely, then all the previous tracks will be removed and replaced with new tracks.
The AudioTrackList.length
and VideoTrackList.length
attributes must return
the number of tracks represented by their objects at the time of getting.
The supported property indices of AudioTrackList
and
VideoTrackList
objects at any instant are the numbers from zero to the number of
tracks represented by the respective object minus one, if any tracks are represented. If an
AudioTrackList
or VideoTrackList
object represents no tracks, it has no
supported property indices.
To determine the value of an indexed property for a given index index in an AudioTrackList
or VideoTrackList
object list, the user agent must return the AudioTrack
or
VideoTrack
object that represents the indexth track in list.
The AudioTrackList.getTrackById(id)
and VideoTrackList.getTrackById(id)
methods must return the first AudioTrack
or
VideoTrack
object (respectively) in the AudioTrackList
or
VideoTrackList
object (respectively) whose identifier is equal to the value of the
id argument (in the natural order of the list, as defined above). When no
tracks match the given argument, the methods must return null.
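A minimal sketch of the method in use; the identifier 'Alternative' is hypothetical and would have to come from the media resource itself:

var track = video.videoTracks.getTrackById('Alternative');
if (track !== null)
  track.selected = true; // null means no track had that identifier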
The AudioTrack
and VideoTrack
objects represent specific tracks of a
media resource. Each track can have an identifier, category, label, and language.
These aspects of a track are permanent for the lifetime of the track; even if a track is removed
from a media resource's AudioTrackList
or VideoTrackList
objects, those aspects do not change.
In addition, AudioTrack
objects can each be enabled or disabled; this is the audio
track's enabled state. When an AudioTrack
is created, its enabled state
must be set to false (disabled). The resource fetch
algorithm can override this.
Similarly, a single VideoTrack
object per VideoTrackList
object can
be selected; this is the video track's selection state. When a VideoTrack
is
created, its selection state must be set to false (not selected). The resource fetch algorithm can override this.
The AudioTrack.id
and VideoTrack.id
attributes must return the identifier
of the track, if it has one, or the empty string otherwise. If the media resource is
in a format that supports the Media Fragments URI fragment identifier syntax, the
identifier returned for a particular track must be the same identifier that would enable the track
if used as the name of a track in the track dimension of such a fragment identifier. [[!MEDIAFRAG]] [[!INBAND]]
For example, in Ogg files, this would be the Name header field of the track. [[!OGGSKELETONHEADERS]]
The AudioTrack.kind
and VideoTrack.kind
attributes must return the category
of the track, if it has one, or the empty string otherwise.
The category of a track is the string given in the first column of the table below that is the
most appropriate for the track based on the definitions in the table's second and third columns,
as determined by the metadata included in the track in the media resource. The cell
in the third column of a row says what the category given in the cell in the first column of that
row applies to; a category is only appropriate for an audio track if it applies to audio tracks,
and a category is only appropriate for video tracks if it applies to video tracks. Categories must
only be returned for AudioTrack
objects if they are appropriate for audio, and must
only be returned for VideoTrack
objects if they are appropriate for video.
For Ogg files, the Role header field of the track gives the relevant metadata. For DASH media
resources, the Role
element conveys the information. For WebM, only the
FlagDefault
element currently maps to a value. The Sourcing In-band
Media Resource Tracks from Media Containers into HTML specification has further details.
[[!OGGSKELETONHEADERS]] [[!DASH]] [[!WEBMCG]] [[!INBAND]]
Category | Definition | Applies to... | Examples |
---|---|---|---|
"alternative" | A possible alternative to the main track, e.g. a different take of a song (audio), or a different angle (video). | Audio and video. | Ogg: "audio/alternate" or "video/alternate"; DASH: "alternate" without "main" and "commentary" roles, and, for audio, without the "dub" role (other roles ignored). |
"captions" | A version of the main video track with captions burnt in. (For legacy content; new content would use text tracks.) | Video only. | DASH: "caption" and "main" roles together (other roles ignored). |
"descriptions" | An audio description of a video track. | Audio only. | Ogg: "audio/audiodesc". |
"main" | The primary audio or video track. | Audio and video. | Ogg: "audio/main" or "video/main"; WebM: the "FlagDefault" element is set; DASH: "main" role without "caption", "subtitle", and "dub" roles (other roles ignored). |
"main-desc" | The primary audio track, mixed with audio descriptions. | Audio only. | AC3 audio in MPEG-2 TS: bsmod=2 and full_svc=1. |
"sign" | A sign-language interpretation of an audio track. | Video only. | Ogg: "video/sign". |
"subtitles" | A version of the main video track with subtitles burnt in. (For legacy content; new content would use text tracks.) | Video only. | DASH: "subtitle" and "main" roles together (other roles ignored). |
"translation" | A translated version of the main audio track. | Audio only. | Ogg: "audio/dub". DASH: "dub" and "main" roles together (other roles ignored). |
"commentary" | Commentary on the primary audio or video track, e.g. a director's commentary. | Audio and video. | DASH: "commentary" role without "main" role (other roles ignored). |
"" (empty string) | No explicit kind, or the kind given by the track's metadata is not recognised by the user agent. | Audio and video. | |
The AudioTrack.label
and VideoTrack.label
attributes must return the label
of the track, if it has one, or the empty string otherwise. [[!INBAND]]
The AudioTrack.language
and VideoTrack.language
attributes must return the
BCP 47 language tag of the language of the track, if it has one, or the empty string otherwise. If
the user agent is not able to express that language as a BCP 47 language tag (for example because
the language information in the media resource's format is a free-form string without
a defined interpretation), then the method must return the empty string, as if the track had no
language. [[!INBAND]]
The AudioTrack.enabled
attribute, on
getting, must return true if the track is currently enabled, and false otherwise. On setting, it
must enable the track if the new value is true, and disable it otherwise. (If the track is no
longer in an AudioTrackList
object, then the track being enabled or disabled has no
effect beyond changing the value of the attribute on the AudioTrack
object.)
Whenever an audio track in an AudioTrackList
that was
disabled is enabled, and whenever one that was enabled is disabled, the user agent must
queue a task to fire a simple event named change
at the AudioTrackList
object.
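A script can observe these changes through a single listener on the list, rather than polling each track; a minimal sketch:

video.audioTracks.addEventListener('change', function () {
  var enabled = [];
  for (var i = 0; i < video.audioTracks.length; i += 1)
    if (video.audioTracks[i].enabled)
      enabled.push(video.audioTracks[i].label || video.audioTracks[i].id);
  console.log('Enabled audio tracks: ' + enabled.join(', '));
});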
An audio track that has no data for a particular position on the media timeline, or that does not exist at that position, must be interpreted as being silent at that point on the timeline.
The VideoTrackList.selectedIndex
attribute
must return the index of the currently selected track, if any. If the VideoTrackList
object does not currently represent any tracks, or if none of the tracks are selected, it must
instead return −1.
The VideoTrack.selected
attribute, on
getting, must return true if the track is currently selected, and false otherwise. On setting, it
must select the track if the new value is true, and unselect it otherwise. If the track is in a
VideoTrackList
, then all the other VideoTrack
objects in that list must
be unselected. (If the track is no longer in a VideoTrackList
object, then the track
being selected or unselected has no effect beyond changing the value of the attribute on the
VideoTrack
object.)
Whenever a track in a VideoTrackList
that was previously
not selected is selected, and whenever the selected track in a VideoTrackList
is
unselected without a new track being selected in its stead, the user agent must queue a task to fire a simple
event named change
at the
VideoTrackList
object. This task must be queued before the task that fires
the resize
event, if any.
A video track that has no data for a particular position on the media timeline must be interpreted as being fully transparent black at that point on the timeline, with the same dimensions as the last frame before that position, or, if the position is before all the data for that track, the same dimensions as the first frame for that track. A track that does not exist at all at the current position must be treated as if it existed but had no data.
For instance, if a video has a track that is only introduced after one hour of playback, and the user selects that track then goes back to the start, then the user agent will act as if that track started at the start of the media resource but was simply transparent until one hour in.
The following are the event handlers (and their corresponding event handler event types) that must be supported, as event handler IDL attributes,
by all objects implementing the AudioTrackList
and VideoTrackList
interfaces:
Event handler | Event handler event type |
---|---|
onchange | change |
onaddtrack | addtrack |
onremovetrack | removetrack |
The audioTracks
and videoTracks
attributes allow scripts to select which track
should play, but it is also possible to select specific tracks declaratively, by specifying
particular tracks in the fragment identifier of the URL of the media
resource. The format of the fragment identifier depends on the MIME type of
the media resource. [[!RFC2046]] [[!URL]]
In this example, a video that uses a format that supports the Media Fragments URI fragment identifier syntax is embedded in such a way that the alternative angles labeled "Alternative" are enabled instead of the default video track. [[!MEDIAFRAG]]
<video src="myvideo#track=Alternative"></video>
Each media element can have a MediaController
. A
MediaController
is an object that coordinates the playback of multiple media elements, for instance so that a sign-language interpreter
track can be overlaid on a video track, with the two being kept synchronised.
By default, a media element has no MediaController
. An implicit
MediaController
can be assigned using the mediagroup
content attribute. An explicit
MediaController
can be assigned directly using the controller
IDL attribute.
Media elements with a MediaController
are said
to be slaved to their controller. The MediaController
modifies the playback
rate and the playback volume of each of the media elements
slaved to it, and ensures that when any of its slaved media
elements unexpectedly stall, the others are stopped at the same time.
When a media element is slaved to a MediaController
, its playback
rate is fixed to that of the other tracks in the same MediaController
, and any
looping is disabled.
enum MediaControllerPlaybackState { "waiting", "playing", "ended" };

[Constructor]
interface MediaController : EventTarget {
  readonly attribute unsigned short readyState; // uses HTMLMediaElement.readyState's values

  readonly attribute TimeRanges buffered;
  readonly attribute TimeRanges seekable;
  readonly attribute unrestricted double duration;
  attribute double currentTime;

  readonly attribute boolean paused;
  readonly attribute MediaControllerPlaybackState playbackState;
  readonly attribute TimeRanges played;
  void pause();
  void unpause();
  void play(); // calls play() on all media elements as well

  attribute double defaultPlaybackRate;
  attribute double playbackRate;

  attribute double volume;
  attribute boolean muted;

  attribute EventHandler onemptied;
  attribute EventHandler onloadedmetadata;
  attribute EventHandler onloadeddata;
  attribute EventHandler oncanplay;
  attribute EventHandler oncanplaythrough;
  attribute EventHandler onplaying;
  attribute EventHandler onended;
  attribute EventHandler onwaiting;

  attribute EventHandler ondurationchange;
  attribute EventHandler ontimeupdate;
  attribute EventHandler onplay;
  attribute EventHandler onpause;
  attribute EventHandler onratechange;
  attribute EventHandler onvolumechange;
};
MediaController()
Returns a new MediaController object.
controller [ = controller ]
Returns the current MediaController
for the media element, if any;
returns null otherwise.
Can be set, to set an explicit MediaController
. Doing so removes the mediagroup
attribute, if any.
readyState
Returns the state that the MediaController
was in the last time it fired events
as a result of reporting the controller state.
The values of this attribute are the same as for the readyState
attribute of media
elements.
buffered
Returns a TimeRanges
object that represents the intersection of the time ranges
for which the user agent has all relevant media data for all the slaved media elements.
seekable
Returns a TimeRanges
object that represents the intersection of the time ranges
into which the user agent can seek for all the slaved media
elements.
duration
Returns the difference between the earliest playable moment and the latest playable moment (not considering whether the data in question is actually buffered or directly seekable, but not including time in the future for infinite streams). Will return zero if there is no media.
currentTime [ = value ]
Returns the current playback position, in seconds, as a position between zero
time and the current duration
.
Can be set, to seek to the given time.
paused
Returns true if playback is paused; false otherwise. When this attribute is true, any media element slaved to this controller will be stopped.
playbackState
Returns the state that the MediaController
was in the last time it fired events
as a result of reporting the controller state.
The value of this attribute is either "playing
", indicating that the media is actively
playing, "ended
", indicating that the media is
not playing because playback has reached the end of all the slaved media elements,
or "waiting
", indicating that the media is not
playing for some other reason (e.g. the MediaController
is paused).
pause()
Sets the paused attribute to true.
unpause()
Sets the paused attribute to false.
play()
Sets the paused attribute to false and
invokes the play()
method of each slaved media element.
played
Returns a TimeRanges
object that represents the union of the time ranges in all
the slaved media elements that have been played.
defaultPlaybackRate [ = value ]
Returns the default rate of playback.
Can be set, to change the default rate of playback.
This default rate has no direct effect on playback, but if the user switches to a
fast-forward mode, when they return to the normal playback mode, it is expected that rate of
playback (playbackRate
) will be returned
to this default rate.
playbackRate [ = value ]
Returns the current rate of playback.
Can be set, to change the rate of playback.
volume [ = value ]
Returns the current playback volume multiplier, as a number in the range 0.0 to 1.0, where 0.0 is the quietest and 1.0 the loudest.
Can be set, to change the volume multiplier.
Throws an IndexSizeError
exception if the new value is not in the range 0.0 .. 1.0.
muted [ = value ]
Returns true if all audio is muted (regardless of other attributes either on the controller or on any media elements slaved to this controller), and false otherwise.
Can be set, to change whether the audio is muted or not.
A media element can have a current media controller, which is a
MediaController
object. When a media element is created without a mediagroup
attribute, it does not have a current media
controller. (If it is created with such an attribute, then that attribute
initialises the current media controller, as defined below.)
The slaved media elements of a MediaController
are the media elements whose current media controller is that
MediaController
. All the slaved media elements of a
MediaController
must use the same clock for their definition of their media
timeline's unit time. When the user agent is required to act on each slaved media element in turn, they must be processed in the order that they
were last associated with the MediaController
.
The controller
attribute on a media
element, on getting, must return the element's current media controller, if
any, or null otherwise. On setting, the user agent must run the following steps:
Let m be the media element in question.
Let old controller be m's current media controller, if it currently has one, and null otherwise.
Let new controller be null.
Let m have no current media controller, if it currently has one.
Remove the element's mediagroup
content
attribute, if any.
If the new value is null, then jump to the update controllers step below.
Let m's current media controller be the new value.
Let new controller be m's current media controller.
Bring the media element up to speed with its new media controller.
Update controllers: If old controller and new controller are the same (whether both null or both the same controller) then abort these steps.
If old controller is not null and still has one or more slaved media elements, then report the controller state for old controller.
If new controller is not null, then report the controller state for new controller.
The MediaController()
constructor, when
invoked, must return a newly created MediaController
object.
The readyState
attribute must
return the value to which it was most recently set. When the MediaController
object
is created, the attribute must be set to the value 0 (HAVE_NOTHING
). The value is updated by the report the
controller state algorithm below.
The seekable
attribute must return
a new static normalised TimeRanges
object that represents the
intersection of the ranges of the media resources of the
slaved media elements that the user agent is able to seek to, at the time the
attribute is evaluated.
The buffered
attribute must return
a new static normalised TimeRanges
object that represents the
intersection of the ranges of the media resources of the
slaved media elements that the user agent has buffered, at the time the attribute is
evaluated. User agents must accurately determine the ranges available, even for media streams
where this can only be determined by tedious inspection.
The duration
attribute must return
the media controller duration.
Every 15 to 250ms, or whenever the MediaController
's media controller
duration changes, whichever happens least often, the user agent must queue a
task to fire a simple event named durationchange
at the
MediaController
. If the MediaController
's media controller
duration decreases such that the media controller position is greater than the
media controller duration, the user agent must immediately seek the media
controller to media controller duration.
The currentTime
attribute must
return the media controller position on getting, and on setting must seek the
media controller to the new value.
Every 15 to 250ms, or whenever the MediaController
's media controller
position changes, whichever happens least often, the user agent must queue a
task to fire a simple event named timeupdate
at the
MediaController
.
When a MediaController
is created it is a playing media controller. It
can be changed into a paused media controller and back either via the user agent's user
interface (when the element is exposing a user
interface to the user) or by script using the APIs defined in this section (see below).
The paused
attribute must return
true if the MediaController
object is a paused media controller, and
false otherwise.
When the pause()
method is invoked,
if the MediaController
is a playing media controller then the user agent
must change the MediaController
into a paused media controller,
queue a task to fire a simple event named pause
at the MediaController
, and then
report the controller state of the MediaController
.
When the unpause()
method is
invoked, if the MediaController
is a paused media controller, the user
agent must change the MediaController
into a playing media controller,
queue a task to fire a simple event named play
at the MediaController
, and then
report the controller state of the MediaController
.
When the play()
method is invoked, the
user agent must invoke the play()
method of each slaved media element in turn, and then invoke the unpause
method of the MediaController
.
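The distinction between unpause() and play() matters when wiring up custom controls: unpause() only clears the controller's own paused state, while play() also clears the paused state of every slaved media element. A minimal sketch, assuming hypothetical playButton, pauseButton, and resumeButton elements:

playButton.onclick = function () { controller.play(); };      // resumes controller and all slaved elements
pauseButton.onclick = function () { controller.pause(); };    // pauses the controller only
resumeButton.onclick = function () { controller.unpause(); }; // resumes the controller only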
The playbackState
attribute
must return the value to which it was most recently set. When the MediaController
object is created, the attribute must be set to the value "waiting
". The value is updated by the report the
controller state algorithm below.
The played
attribute must return a
new static normalised TimeRanges
object that represents the union of the
ranges of points on the media timelines of the media resources of the slaved media elements that the
user agent has so far reached through the usual monotonic increase of their current playback positions during normal playback, at the time the
attribute is evaluated.
A MediaController
has a media controller default playback rate and a
media controller playback rate, which must both be set to 1.0 when the
MediaController
object is created.
The defaultPlaybackRate
attribute, on getting, must return the MediaController
's media controller
default playback rate, and on setting, must set the MediaController
's
media controller default playback rate to the new value, then queue a
task to fire a simple event named ratechange
at the
MediaController
.
The playbackRate
attribute, on
getting, must return the MediaController
's media controller playback
rate, and on setting, must set the MediaController
's media controller
playback rate to the new value, then queue a task to fire a simple
event named ratechange
at the
MediaController
.
A MediaController
has a media controller volume multiplier, which must
be set to 1.0 when the MediaController
object is created, and a media controller
mute override, which must initially be false.
The volume
attribute, on getting,
must return the MediaController
's media controller volume multiplier,
and on setting, if the new value is in the range 0.0 to 1.0 inclusive, must set the
MediaController
's media controller volume multiplier to the new value
and queue a task to fire a simple event named volumechange
at the
MediaController
. If the new value is outside the range 0.0 to 1.0 inclusive, then, on
setting, an IndexSizeError
exception must be thrown instead.
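Because the setter throws for out-of-range values, a robust helper clamps before assigning; a minimal sketch:

function setControllerVolume(controller, value) {
  // Clamp to [0.0, 1.0] so the setter never throws an IndexSizeError.
  controller.volume = Math.min(1.0, Math.max(0.0, value));
}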
The muted
attribute, on getting, must
return the MediaController
's media controller mute override, and on
setting, must set the MediaController
's media controller mute override
to the new value and queue a task to fire a simple event named volumechange
at the
MediaController
.
The media resources of all the slaved media
elements of a MediaController
have a defined temporal relationship which
provides relative offsets between the zero time of each such media resource: for
media resources with a timeline offset, their
relative offsets are the difference between their timeline offset; the zero times of
all the media resources without a timeline offset
are not offset from each other (i.e. the origins of their timelines are cotemporal); and finally,
the zero time of the media resource with the earliest timeline offset
(if any) is not offset from the zero times of the media
resources without a timeline offset (i.e. the origins of media resources without a timeline offset are further cotemporal
with the earliest defined point on the timeline of the media resource with the
earliest timeline offset).
The media resource end position of a media resource in a media element is defined as follows: if the media resource has a finite and known duration, the media resource end position is the duration of the media resource's timeline (the last defined position on that timeline); otherwise, the media resource's duration is infinite or unknown, and the media resource end position is the time of the last frame of media data currently available for that media resource.
Each MediaController
also has its own defined timeline. On this timeline, all the
media resources of all the slaved media elements
of the MediaController
are temporally aligned according to their defined offsets. The
media controller duration of that MediaController
is the time from the
earliest earliest possible position, relative to this MediaController
timeline, of any of the media resources of the slaved
media elements of the MediaController
, to the time of the latest media
resource end position of the media resources of the
slaved media elements of the MediaController
, again relative to this
MediaController
timeline.
Each MediaController
has a media controller position. This is the time
on the MediaController
's timeline at which the user agent is trying to play the
slaved media elements. When a MediaController
is created, its
media controller position is initially zero.
When the user agent is to bring a media element up to speed with its new media controller, it must seek that media element to the
MediaController
's media controller position relative to the media
element's timeline.
When the user agent is to seek the media controller to a particular new playback position, it must follow these steps:
If the new playback position is less than zero, then set it to zero.
If the new playback position is greater than the media controller duration, then set it to the media controller duration.
Set the media controller position to the new playback position.
Seek each slaved media element to the new playback position relative to the media element timeline.
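A minimal sketch of seeking from script; values past either end of the controller's timeline are clamped by the steps above, and every slaved media element is sought together:

function skip(controller, seconds) {
  controller.currentTime = controller.currentTime + seconds; // seeks all slaved media elements
}

skip(controller, 30);  // jump ahead 30 seconds
skip(controller, -10); // jump back 10 seconds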
A MediaController
is a restrained media controller if the
MediaController
is a playing media controller, but either at least one
of its slaved media elements whose autoplaying flag is true still has
its paused
attribute set to true, or, all of its
slaved media elements have their paused
attribute set to true.
A MediaController
is a blocked media controller if the
MediaController
is a paused media controller, or if any of its
slaved media elements are blocked media
elements, or if any of its slaved media elements whose autoplaying
flag is true still have their paused
attribute set to
true, or if all of its slaved media elements have their paused
attribute set to true.
A media element is blocked on its media controller if the
MediaController
is a blocked media controller, or if its media
controller position is either before the media resource's earliest
possible position relative to the MediaController
's timeline or after the end
of the media resource relative to the MediaController
's timeline.
When a MediaController
is not a blocked media
controller and it has at least one slaved media
element whose Document
is a fully active Document
,
the MediaController
's media controller position must increase
monotonically at media controller playback rate units of time on the
MediaController
's timeline per unit time of the clock used by its slaved media
elements.
When the zero point on the timeline of a MediaController
moves relative to the
timelines of the slaved media elements by a time difference ΔT, the MediaController
's media controller
position must be decremented by ΔT.
In some situations, e.g. when playing back a live stream without buffering anything, the media controller position would increase monotonically as described above at the same rate as the ΔT described in the previous paragraph decreases it, with the end result that for all intents and purposes, the media controller position would appear to remain constant (probably with the value 0).
A MediaController
has a most recently reported readiness state, which
is a number from 0 to 4 derived from the numbers used for the media element readyState
attribute, and a most recently reported
playback state, which is either playing, waiting, or ended.
When a MediaController
is created, its most recently reported readiness
state must be set to 0, and its most recently reported playback state must be
set to waiting.
When a user agent is required to report the controller state for a
MediaController
, the user agent must run the following steps:
If the MediaController
has no slaved media elements, let new readiness state be 0.
Otherwise, let it have the lowest value of the readyState
IDL attributes of all of its slaved media
elements.
If the MediaController
's most recently reported readiness state is
less than the new readiness state, then run these substeps:
Let next state be the MediaController
's most
recently reported readiness state.
Loop: Increment next state by one.
Queue a task to run the following steps:
Set the MediaController
's readyState
attribute to the value next state.
Fire a simple event at the MediaController
object, whose
name is the event name corresponding to the value of next state given in
the table below.
If next state is less than new readiness state, then return to the step labeled loop.
Otherwise, if the MediaController
's most recently reported readiness
state is greater than new readiness state then queue a
task to fire a simple event at the MediaController
object,
whose name is the event name corresponding to the value of new readiness
state given in the table below.
Value of new readiness state | Event name |
---|---|
0 | emptied |
1 | loadedmetadata |
2 | loadeddata |
3 | canplay |
4 | canplaythrough |
Let the MediaController
's most recently reported readiness state
be new readiness state.
Initialise new playback state by setting it to the state given for the first matching condition from the following list:
If the MediaController has no slaved media elements: waiting.
If all of the MediaController's slaved media elements have ended playback and the media controller playback rate is positive or zero: ended.
If the MediaController is a blocked media controller: waiting.
Otherwise: playing.
If the MediaController
's most recently reported playback state
is not equal to new playback state and the new playback
state is ended, then queue a task that, if the
MediaController
object is a playing media controller, and all of the
MediaController
's slaved media elements have still ended
playback, and the media controller playback rate is still positive or zero,
changes the MediaController
object to a paused media controller and
then fires a simple event named pause
at the MediaController
object.
If the MediaController
's most recently reported playback state is
not equal to new playback state then queue a task to run the
following steps:
Set the MediaController
's playbackState
attribute to the value given in
the second column of the row of the following table whose first column contains the new playback state.
Fire a simple event at the MediaController
object, whose name
is the value given in the third column of the row of the following table whose first column
contains the new playback state.
New playback state | New value for playbackState | Event name |
---|---|---|
playing | "playing" | playing |
waiting | "waiting" | waiting |
ended | "ended" | ended |
Let the MediaController
's most recently reported playback state
be new playback state.
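These reported states let a script hold synchronised playback until every slaved media element is ready; a minimal sketch, assuming the slaved elements are set up as in the earlier loadVideo example:

var controller = new MediaController();
controller.pause(); // hold playback while the slaved elements load

controller.oncanplaythrough = function () {
  // Fires once the lowest readyState across all slaved elements reaches 4.
  controller.unpause();
};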
The following are the event handlers (and their corresponding event handler event types) that must be supported, as event handler IDL attributes,
by all objects implementing the MediaController
interface:
Event handler | Event handler event type |
---|---|
onemptied | emptied |
onloadedmetadata | loadedmetadata |
onloadeddata | loadeddata |
oncanplay | canplay |
oncanplaythrough | canplaythrough |
onplaying | playing |
onended | ended |
onwaiting | waiting |
ondurationchange | durationchange |
ontimeupdate | timeupdate |
onplay | play |
onpause | pause |
onratechange | ratechange |
onvolumechange | volumechange |
The task source for the tasks listed in this section is the DOM manipulation task source.
The mediagroup
content attribute on media elements can be used to link multiple media elements together by implicitly creating a MediaController
. The
value is text; media elements with the same value are
automatically linked by the user agent.
When a media element is created with a mediagroup
attribute, and when a media element's
mediagroup
attribute is set, changed, or removed, the
user agent must run the following steps:
Let m be the media element in question.
Let old controller be m's current media controller, if it currently has one, and null otherwise.
Let new controller be null.
Let m have no current media controller, if it currently has one.
If m's mediagroup
attribute
is being removed, then jump to the update controllers step below.
If there is another media element whose Document
is the same as
m's node document (even if one or both of these elements are not
actually in the Document
), and which
also has a mediagroup
attribute, and whose mediagroup
attribute has the same value as the new value of
m's mediagroup
attribute, then
let controller be that media element's current media
controller.
Otherwise, let controller be a newly created
MediaController
.
Let m's current media controller be controller.
Let new controller be m's current media controller.
Bring the media element up to speed with its new media controller.
Update controllers: If old controller and new controller are the same (whether both null or both the same controller) then abort these steps.
If old controller is not null and still has one or more slaved media elements, then report the controller state for old controller.
If new controller is not null, then report the controller state for new controller.
The mediaGroup
IDL attribute on media elements must reflect the mediagroup
content attribute.
Multiple media elements referencing the same media
resource will share a single network request. This can be used to efficiently play two
(video) tracks from the same media resource in two different places on the screen.
Used with the mediagroup
attribute, these elements can
also be kept synchronised.
In this example, a sign-language interpreter track from a movie file is overlaid on the primary
video track of that same video file using two video
elements, some CSS, and an
implicit MediaController
:
<article>
 <style scoped>
  div { margin: 1em auto; position: relative; width: 400px; height: 300px; }
  video { position: absolute; bottom: 0; right: 0; }
  video:first-child { width: 100%; height: 100%; }
  video:last-child { width: 30%; }
 </style>
 <div>
  <video src="movie.vid#track=Video&track=English" autoplay controls mediagroup=movie></video>
  <video src="movie.vid#track=sign" autoplay mediagroup=movie></video>
 </div>
</article>
A media element can have a group of associated text tracks, known as the media element's list of text tracks. The text tracks are sorted as follows:
The text tracks corresponding to track element children of the media element, in tree order.
The text tracks added using the addTextTrack() method, in the order they were added, oldest first.
A text track consists of:
This decides how the track is handled by the user agent. The kind is represented by a string. The possible strings are:
subtitles
captions
descriptions
chapters
metadata
The kind of track can change dynamically, in the case of
a text track corresponding to a track
element.
This is a human-readable string intended to identify the track for the user.
The label of a track can change dynamically, in the
case of a text track corresponding to a track
element.
When a text track label is the empty string, the user agent should automatically generate an appropriate label from the text track's other properties (e.g. the kind of text track and the text track's language) for use in its user interface. This automatically-generated label is not exposed in the API.
This is a string extracted from the media resource specifically for in-band metadata tracks to enable such tracks to be dispatched to different scripts in the document.
For example, a traditional TV station broadcast streamed on the Web and augmented with Web-specific interactive features could include text tracks with metadata for ad targeting, trivia game data during game shows, player states during sports games, recipe information during food programs, and so forth. As each program starts and ends, new tracks might be added or removed from the stream, and as each one is added, the user agent could bind them to dedicated script modules using the value of this attribute.
Other than for in-band metadata text tracks, the in-band metadata track dispatch type is the empty string. How this value is populated for different media formats is described in steps to expose a media-resource-specific text track.
This is a string (a BCP 47 language tag) representing the language of the text track's cues. [[!BCP47]]
The language of a text track can change dynamically,
in the case of a text track corresponding to a track
element.
One of the following:
Not loaded: Indicates that the text track's cues have not been obtained.
Loading: Indicates that the text track is loading and there have been no fatal errors encountered so far. Further cues might still be added to the track by the parser.
Loaded: Indicates that the text track has been loaded with no fatal errors.
Failed to load: Indicates that the text track was enabled, but when the user agent attempted to obtain it, this failed in some way (e.g. URL could not be resolved, network error, unknown text track format). Some or all of the cues are likely missing and will not be obtained.
The readiness state of a text track changes dynamically as the track is obtained.
One of the following:
Disabled: Indicates that the text track is not active. Other than for the purposes of exposing the track in the DOM, the user agent is ignoring the text track. No cues are active, no events are fired, and the user agent will not attempt to obtain the track's cues.
Hidden: Indicates that the text track is active, but that the user agent is not actively displaying the cues. If no attempt has yet been made to obtain the track's cues, the user agent will perform such an attempt momentarily. The user agent is maintaining a list of which cues are active, and events are being fired accordingly.
Showing: Indicates that the text track is active. If no attempt has yet been made to obtain the track's cues, the user agent will perform such an attempt momentarily. The user agent is maintaining a list of which cues are active, and events are being fired accordingly. In addition, for text tracks whose kind is subtitles or captions, the cues are being overlaid on the video as appropriate; for text tracks whose kind is descriptions, the user agent is making the cues available to the user in a non-visual fashion; and for text tracks whose kind is chapters, the user agent is making available to the user a mechanism by which the user can navigate to any point in the media resource by selecting a cue.
A list of text track cues, along with rules for updating the text track rendering. For example, for WebVTT, the rules for updating the display of WebVTT text tracks. [[!WEBVTT]]
The list of cues of a text track can change dynamically, either because the text track has not yet been loaded or is still loading, or due to DOM manipulation.
Each text track has a corresponding TextTrack
object.
Each media element has a list of pending text tracks, which must initially be empty, a blocked-on-parser flag, which must initially be false, and a did-perform-automatic-track-selection flag, which must also initially be false.
When the user agent is required to populate the list of pending text tracks of a media element, the user agent must add to the element's list of pending text tracks each text track in the element's list of text tracks whose text track mode is not disabled and whose text track readiness state is loading.
Whenever a track
element's parent node changes, the user agent must remove the
corresponding text track from any list of pending text tracks that it is
in.
Whenever a text track's text track readiness state changes to either loaded or failed to load, the user agent must remove it from any list of pending text tracks that it is in.
When a media element is created by an HTML parser or XML parser, the user agent must set the element's blocked-on-parser flag to true. When a media element is popped off the stack of open elements of an HTML parser or XML parser, the user agent must honor user preferences for automatic text track selection, populate the list of pending text tracks, and set the element's blocked-on-parser flag to false.
The text tracks of a media element are ready when both the element's list of pending text tracks is empty and the element's blocked-on-parser flag is false.
Each media element has a pending text track change notification flag, which must initially be unset.
Whenever a text track that is in a media element's list of text tracks has its text track mode change value, the user agent must run the following steps for the media element:
If the media element's pending text track change notification flag is set, abort these steps.
Set the media element's pending text track change notification flag.
Queue a task that runs the following substeps:
Unset the media element's pending text track change notification flag.
Fire a simple event named change
at
the media element's textTracks
attribute's TextTrackList
object.
If the media element's show poster flag is not set, run the time marches on steps.
The task source for the tasks listed in this section is the DOM manipulation task source.
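Since mode changes across all of an element's text tracks funnel into this one event, a single listener suffices; a minimal sketch:

video.textTracks.addEventListener('change', function () {
  for (var i = 0; i < video.textTracks.length; i += 1)
    console.log(video.textTracks[i].label + ': ' + video.textTracks[i].mode);
});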
A text track cue is the unit of time-sensitive data in a text track, corresponding for instance for subtitles and captions to the text that appears at a particular time and disappears at another time.
Each text track cue consists of:
An identifier: An arbitrary string.
A start time: The time, in seconds and fractions of a second, that describes the beginning of the range of the media data to which the cue applies.
An end time: The time, in seconds and fractions of a second, that describes the end of the range of the media data to which the cue applies.
A pause-on-exit flag: A boolean indicating whether playback of the media resource is to pause when the end of the range to which the cue applies is reached.
Some additional format-specific data: Additional fields, as needed for the format, including the actual data of the cue. For example, WebVTT has a text track cue writing direction and so forth. [[!WEBVTT]]
The cue data: The raw data of the cue, and rules for rendering the cue in isolation. The precise nature of this data is defined by the format. For example, WebVTT uses text.
Rules for extracting the chapter title: An algorithm which, when applied to the cue, returns a string that can be used in user interfaces that use the cue as a chapter title.
The text track cue start time and text track cue end time can be negative. (The current playback position can never be negative, though, so cues entirely before time zero cannot be active.)
Each text track cue has a corresponding TextTrackCue
object (or more
specifically, an object that inherits from TextTrackCue
— for example, WebVTT
cues use the VTTCue
interface). A text track cue's in-memory
representation can be dynamically changed through this TextTrackCue
API. [[!WEBVTT]]
A text track cue is associated with rules for updating the text track
rendering, as defined by the specification for the specific kind of text track
cue. These rules are used specifically when the object representing the cue is added to a
TextTrack
object using the addCue()
method.
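A minimal sketch of creating a script-owned track and adding a cue to it; VTTCue is defined by the WebVTT specification, and the cue payload here is purely illustrative:

var track = video.addTextTrack('metadata', 'game events');
track.mode = 'hidden'; // active, but never rendered

// The WebVTT rules for updating the text track rendering are associated
// with the cue when it is added to the track.
track.addCue(new VTTCue(10.0, 15.0, '{"event": "goal"}'));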
In addition, each text track cue has two pieces of dynamic information:
The text track cue active flag: This flag must be initially unset. The flag is used to ensure events are fired appropriately when the cue becomes active or inactive, and to make sure the right cues are rendered.
The user agent must synchronously unset this flag whenever the text track cue is
removed from its text track's text track list of cues; whenever the
text track itself is removed from its media element's list of
text tracks or has its text track mode changed to disabled; and whenever the media element's readyState
is changed back to HAVE_NOTHING
. When the flag is unset in this way for one
or more cues in text tracks that were showing prior to the relevant incident, the user agent must, after having unset
the flag for all the affected cues, apply the rules for updating the text track
rendering of those text tracks. For example, for text tracks based on WebVTT, the rules for updating the display
of WebVTT text tracks. [[!WEBVTT]]
The text track cue display state: This is used as part of the rendering model, to keep cues in a consistent position. It must initially be empty. Whenever the text track cue active flag is unset, the user agent must empty the text track cue display state.
The text track cues of a media element's text tracks are ordered relative to each other in the text track cue order, which is determined as follows: first group the cues by their text track, with the groups being sorted in the same order as their text tracks appear in the media element's list of text tracks; then, within each group, cues must be sorted by their start time, earliest first; then, any cues with the same start time must be sorted by their end time, latest first; and finally, any cues with identical end times must be sorted in the order they were last added to their respective text track list of cues, oldest first (so e.g. for cues from a WebVTT file, that would initially be the order in which the cues were listed in the file). [[!WEBVTT]]
A media-resource-specific text track is a text track that corresponds to data found in the media resource.
Rules for processing and rendering such data are defined by the relevant specifications, e.g. the specification of the video format if the media resource is a video. Details for some legacy formats can be found in the Sourcing In-band Media Resource Tracks from Media Containers into HTML specification. [[!INBAND]]
When a media resource contains data that the user agent recognises and supports as being equivalent to a text track, the user agent runs the steps to expose a media-resource-specific text track with the relevant data, as follows.
Associate the relevant data with a new text track and its corresponding new
TextTrack
object. The text track is a media-resource-specific
text track.
Set the new text track's kind, label, and language based on the semantics of the relevant data, as defined by the relevant specification. If there is no label in that data, then the label must be set to the empty string.
Associate the text track list of cues with the rules for updating the text track rendering appropriate for the format in question.
If the new text track's kind is metadata
, then set the text track in-band
metadata track dispatch type as follows, based on the type of the media
resource:
If the media resource is a WebM file, then the text track in-band metadata track dispatch type must be set to the value of the CodecID element. [[!WEBMCG]]
If the media resource is an MPEG-4 file, then let the first stsd box of the first stbl box of the first minf box of the first mdia box of the text track's trak box in the first moov box of the file be the stsd box, if any.
If the file has no stsd box, or if the stsd box has neither a mett
box nor a metx
box, then the text track
in-band metadata track dispatch type must be set to the empty string.
Otherwise, if the stsd box has a mett
box then the text
track in-band metadata track dispatch type must be set to the concatenation of the
string "mett
", a U+0020 SPACE character, and the value of the first mime_format
field of the first mett
box of the stsd
box, or the empty string if that field is absent in that box.
Otherwise, if the stsd box has no mett
box but has a metx
box then the text track in-band metadata track dispatch type
must be set to the concatenation of the string "metx
", a U+0020 SPACE
character, and the value of the first namespace
field of the first metx
box of the stsd box, or the empty string if that field is absent in
that box.
[[!MPEG4]]
Populate the new text track's list of cues with the cues parsed so far, following the guidelines for exposing cues, and begin updating it dynamically as necessary.
Set the new text track's readiness state to loaded.
Set the new text track's mode to the mode consistent with the user's preferences and the requirements of the relevant specification for the data.
For instance, if there are no other active subtitles, and this is a forced subtitle track (a subtitle track giving subtitles in the audio track's primary language, but only for audio that is actually in another language), then those subtitles might be activated here.
Add the new text track to the media element's list of text tracks.
Fire a trusted event with the name addtrack
, that does not bubble and is not cancelable, and that uses
the TrackEvent
interface, with the track
attribute initialised to the text track's TextTrack
object, at the
media element's textTracks
attribute's
TextTrackList
object.
When a track
element is created, it must be associated with a new text
track (with its value set as defined below) and its corresponding new
TextTrack
object.
The text track kind is determined from the state of the element's kind
attribute according to the following table; for a state given
in a cell of the first column, the kind is the string given
in the second column:
State | String |
---|---|
Subtitles | subtitles |
Captions | captions |
Descriptions | descriptions |
Chapters | chapters |
Metadata | metadata |
The text track label is the element's track label.
The text track language is the element's track language, if any, or the empty string otherwise.
As the kind
, label
,
and srclang
attributes are set, changed, or removed, the
text track must update accordingly, as per the definitions above.
Changes to the track URL are handled in the algorithm below.
The text track readiness state is initially not loaded, and the text track mode is initially disabled.
The text track list of cues is initially empty. It is dynamically modified when the referenced file is parsed. Associated with the list are the rules for updating the text track rendering appropriate for the format in question; for WebVTT, this is the rules for updating the display of WebVTT text tracks. [[!WEBVTT]]
When a track
element's parent element changes and the new parent is a media
element, then the user agent must add the track
element's corresponding
text track to the media element's list of text tracks, and
then queue a task to fire a trusted event with the name addtrack
, that does not bubble and is not cancelable, and that uses
the TrackEvent
interface, with the track
attribute initialised to the text track's TextTrack
object, at the
media element's textTracks
attribute's
TextTrackList
object.
When a track
element's parent element changes and the old parent was a media
element, then the user agent must remove the track
element's corresponding
text track from the media element's list of text tracks,
and then queue a task to fire a trusted event with the name removetrack
, that does not bubble and is not cancelable, and that
uses the TrackEvent
interface, with the track
attribute initialised to the text track's
TextTrack
object, at the media element's textTracks
attribute's TextTrackList
object.
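A script can watch both events on the TextTrackList to mirror track elements as they come and go; a minimal sketch:

video.textTracks.addEventListener('addtrack', function (event) {
  console.log('Track added: ' + event.track.label);
});
video.textTracks.addEventListener('removetrack', function (event) {
  console.log('Track removed: ' + event.track.label);
});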
When a text track corresponding to a track
element is added to a
media element's list of text tracks, the user agent must queue a
task to run the following steps for the media element:
If the element's blocked-on-parser flag is true, abort these steps.
If the element's did-perform-automatic-track-selection flag is true, abort these steps.
Honor user preferences for automatic text track selection for this element.
When the user agent is required to honor user preferences for automatic text track selection for a media element, the user agent must run the following steps:
Perform automatic text track selection for subtitles
and captions
.
If there are any text tracks in the media element's list of text tracks whose text track kind is metadata that correspond to track elements with a default attribute set whose text track mode is set to disabled, then set the text track mode of all such tracks to hidden.
Set the element's did-perform-automatic-track-selection flag to true.
When the steps above say to perform automatic text track selection for one or more text track kinds, it means to run the following steps:
Let candidates be a list consisting of the text tracks in the media element's list of text tracks whose text track kind is one of the kinds that were passed to the algorithm, if any, in the order given in the list of text tracks.
If candidates is empty, then abort these steps.
If any of the text tracks in candidates have a text track mode set to showing, abort these steps.
If the user has expressed an interest in having a track from candidates enabled based on its text track kind, text track language, and text track label, then set its text track mode to showing.
For example, the user could have set a browser preference to the effect of "I want French captions whenever possible", or "If there is a subtitle track with 'Commentary' in the title, enable it", or "If there are audio description tracks available, enable one, ideally in Swiss German, but failing that in Standard Swiss German or Standard German".
Otherwise, if there are any text tracks in candidates that correspond to track
elements with a default
attribute set whose text track mode is
set to disabled, then set the text track
mode of the first such track to showing.
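For illustration only, the selection steps above can be approximated in script along the following lines. Real automatic selection is performed by the user agent itself, which can consult the user's actual preferences; the function name here is hypothetical.

// A script-level sketch of the selection steps above, for illustration only.
function performAutomaticTextTrackSelection(media, kinds) {
  // Let candidates be the text tracks whose kind is one of the given kinds.
  var candidates = [];
  for (var i = 0; i < media.textTracks.length; i++) {
    var track = media.textTracks[i];
    if (kinds.indexOf(track.kind) !== -1) candidates.push(track);
  }
  if (candidates.length === 0) return; // nothing to select

  // If any candidate is already showing, abort.
  for (var j = 0; j < candidates.length; j++) {
    if (candidates[j].mode === 'showing') return;
  }

  // (A real user agent would consult user preferences here.)

  // Otherwise, enable the first disabled candidate whose track element
  // has a default attribute.
  var defaults = media.querySelectorAll('track[default]');
  for (var k = 0; k < defaults.length; k++) {
    var tt = defaults[k].track;
    if (kinds.indexOf(tt.kind) !== -1 && tt.mode === 'disabled') {
      tt.mode = 'showing';
      return;
    }
  }
}

performAutomaticTextTrackSelection(document.querySelector('video'),
                                   ['subtitles', 'captions']);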
When a text track corresponding to a track element experiences any of the following circumstances, the user agent must start the track processing model for that text track and its track element:

- The track element is created.
- The track element's parent element changes and the new parent is a media element.

When a user agent is to start the track processing model for a text track and its track element, it must run the following algorithm.
This algorithm interacts closely with the event loop mechanism; in particular, it has
a synchronous section (which is triggered as part of the event loop
algorithm). The steps in that section are marked with ⌛.
If another occurrence of this algorithm is already running for this text
track and its track
element, abort these steps, letting that other algorithm
take care of this element.
If the text track's text track mode is not set to one of hidden or showing, abort these steps.
If the text track's track
element does not have a media
element as a parent, abort these steps.
Run the remainder of these steps in parallel, allowing whatever caused these steps to run to continue.
Top: Await a stable state. The synchronous section consists of the following steps. (The steps in the synchronous section are marked with ⌛.)
⌛ Set the text track readiness state to loading.
⌛ Let URL be the track URL of the text track.
⌛ If the track
element's parent is a media element then
let CORS mode be the state of the parent media element's crossorigin
content attribute. Otherwise, let CORS mode be No CORS.
End the synchronous section, continuing the remaining steps in parallel.
If URL is not the empty string, perform a potentially CORS-enabled
fetch of URL, with the mode being CORS mode, the origin being the origin of the
track
element's node document, and the default origin behaviour set
to fail.
The resource obtained in this fashion, if any, contains the text track data. If any data is obtained, it is by definition CORS-same-origin (cross-origin resources that are not suitably CORS-enabled do not get this far).
The tasks queued by the fetching algorithm on the networking task source to process the data as it is being fetched must determine the type of the resource. If the type of the resource is not a supported text track format, the load will fail, as described below. Otherwise, the resource's data must be passed to the appropriate parser (e.g. the WebVTT parser) as it is received, with the text track list of cues being used for that parser's output. [[!WEBVTT]]
The appropriate parser will incrementally update the text track list of cues during these networking task source tasks, as each such task is run with whatever data has been received from the network.
This specification does not currently say whether or how to check the MIME types of text tracks, or whether or how to perform file type sniffing using the actual file data. Implementors differ in their intentions on this matter and it is therefore unclear what the right solution is. In the absence of any requirement here, the HTTP specification's strict requirement to follow the Content-Type header prevails ("Content-Type specifies the media type of the underlying data." ... "If and only if the media type is not given by a Content-Type field, the recipient MAY attempt to guess the media type via inspection of its content and/or the name extension(s) of the URI used to identify the resource.").
If the fetching algorithm fails for any reason (network error,
the server returns an error code, a cross-origin check fails, etc), or if URL is the empty string, then queue a task to first change the
text track readiness state to failed to
load and then fire a simple event named error
at the track
element. This task must use the DOM manipulation task source.
If the fetching algorithm does not fail, but the
type of the resource is not a supported text track format, or the file was not successfully
processed (e.g. the format in question is an XML format and the file contained a well-formedness
error that the XML specification requires be detected and reported to the application), then the
task that is queued by the
networking task source in which the aforementioned problem is found must change the
text track readiness state to failed to
load and fire a simple event named error
at the track
element.
If the fetching algorithm does not fail, and the file was
successfully processed, then the final task that is queued by the networking task source, after it has
finished parsing the data, must change the text track readiness state to loaded, and fire a simple event named load
at the track
element.
If, while the fetching algorithm is active, either:

- the track URL changes so that it is no longer equal to URL, while the text track mode is set to hidden or showing; or
- the text track mode changes to disabled, while the track URL is equal to URL;

...then the user agent must abort the fetching algorithm,
discarding any pending tasks generated by that algorithm (and
in particular, not adding any cues to the text track list of cues after the moment
the URL changed), and then queue a task that first changes the text track
readiness state to failed to load and
then fires a simple event named error
at the track
element. This task must use the DOM manipulation task source.
Wait until the text track readiness state is no longer set to loading.
Wait until the track URL is no longer equal to URL, at the same time as the text track mode is set to hidden or showing.
Jump to the step labeled top.
Whenever a track
element has its src
attribute
set, changed, or removed, the user agent must immediately empty the element's text
track's text track list of cues. (This also causes the algorithm above to stop
adding cues from the resource being obtained using the previously given URL, if any.)
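The load and error events fired by this algorithm can be observed from script. In this non-normative sketch, the subtitle file name is an assumption:

// Observing the 'load' and 'error' events fired by the steps above.
var video = document.querySelector('video');
var track = document.createElement('track');
track.kind = 'subtitles';
track.srclang = 'en';
track.src = 'subtitles.vtt'; // hypothetical file
track.addEventListener('load', function () {
  console.log('track loaded; cues:', track.track.cues.length);
});
track.addEventListener('error', function () {
  console.log('track failed to load');
});
video.appendChild(track);    // parenting to a media element starts the algorithm
track.track.mode = 'hidden'; // the fetch only proceeds once the mode is hidden or showing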
How a specific format's text track cues are to be interpreted for the purposes of processing by an HTML user agent is defined by that format. In the absence of such a specification, this section provides some constraints within which implementations can attempt to consistently expose such formats.
To support the text track model of HTML, each unit of timed data is converted to a text track cue. Where the mapping of the format's features to the aspects of a text track cue as defined in this specification are not defined, implementations must ensure that the mapping is consistent with the definitions of the aspects of a text track cue as defined above, as well as with the following constraints:
The text track cue identifier
Should be set to the empty string if the format has no obvious analogue to a per-cue identifier.
The text track cue pause-on-exit flag
Should be set to false.
For media-resource-specific text tracks
of kind metadata
,
text track cues are exposed using the DataCue
object
unless there is a more appropriate TextTrackCue
interface available.
For example, if the media-resource-specific text track format is WebVTT, then VTTCue is more appropriate. [[WEBVTT]]
interface TextTrackList : EventTarget {
  readonly attribute unsigned long length;
  getter TextTrack (unsigned long index);
  TextTrack? getTrackById(DOMString id);

  attribute EventHandler onchange;
  attribute EventHandler onaddtrack;
  attribute EventHandler onremovetrack;
};
textTracks
. length
Returns the number of text tracks associated with the media element (e.g. from track
elements). This is the number of text tracks in the media element's list of text tracks.
textTracks[ n ]
Returns the TextTrack object representing the nth text track in the media element's list of text tracks.
textTracks . getTrackById( id )
Returns the TextTrack object with the given identifier, or null if no track has that identifier.
A TextTrackList
object represents a dynamically updating list of text tracks in a given order.
The textTracks
attribute of media elements must return a TextTrackList
object
representing the TextTrack
objects of the text tracks
in the media element's list of text tracks, in the same order as in the
list of text tracks.
The length
attribute of a
TextTrackList
object must return the number of text
tracks in the list represented by the TextTrackList
object.
The supported property indices of a TextTrackList
object at any
instant are the numbers from zero to the number of text tracks in
the list represented by the TextTrackList
object minus one, if any. If there are no
text tracks in the list, there are no supported property
indices.
To determine the value of an indexed property of a TextTrackList
object for a given index index, the user agent must return the indexth text track in the list represented by the
TextTrackList
object.
The getTrackById(id)
method must return the first TextTrack
in the
TextTrackList
object whose id
IDL attribute
would return a value equal to the value of the id argument. When no tracks
match the given argument, the method must return null.
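For example, a script could look up a specific track by identifier, assuming markup such as <track id="cc-en" kind="captions" srclang="en" src="captions.vtt"> (the identifier and file name are illustrative):

// Looking up a track by identifier.
var video = document.querySelector('video');
var captions = video.textTracks.getTrackById('cc-en');
if (captions !== null) {
  captions.mode = 'showing';
} else {
  console.log('no text track with that identifier');
}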
enum TextTrackMode { "disabled", "hidden", "showing" };
enum TextTrackKind { "subtitles", "captions", "descriptions", "chapters", "metadata" };
interface TextTrack : EventTarget {
  readonly attribute TextTrackKind kind;
  readonly attribute DOMString label;
  readonly attribute DOMString language;

  readonly attribute DOMString id;
  readonly attribute DOMString inBandMetadataTrackDispatchType;

  attribute TextTrackMode mode;

  readonly attribute TextTrackCueList? cues;
  readonly attribute TextTrackCueList? activeCues;

  void addCue(TextTrackCue cue);
  void removeCue(TextTrackCue cue);

  attribute EventHandler oncuechange;
};
addTextTrack( kind [, label [, language ] ] )
Creates and returns a new TextTrack object, which is also added to the media element's list of text tracks.
kind
Returns the text track kind string.
label
Returns the text track label, if there is one, or the empty string otherwise (indicating that a custom label probably needs to be generated from the other attributes of the object if the object is exposed to the user).
language
Returns the text track language string.
id
Returns the ID of the given track.
For in-band tracks, this is the ID that can be used with a fragment identifier if the format
supports the Media Fragments URI syntax, and that can be used with the getTrackById()
method. [[!MEDIAFRAG]]
For TextTrack
objects corresponding to track
elements, this is the
ID of the track
element.
inBandMetadataTrackDispatchType
Returns the text track in-band metadata track dispatch type string.
mode [ = value ]
Returns the text track mode, represented by a string from the following list:
"disabled"
The text track disabled mode.
"hidden"
The text track hidden mode.
"showing"
The text track showing mode.
Can be set, to change the mode.
cues
Returns the text track list of cues, as a TextTrackCueList
object.
activeCues
Returns the text track cues from the text track
list of cues that are currently active (i.e. that start before the current playback
position and end after it), as a TextTrackCueList
object.
addCue( cue )
Adds the given cue to textTrack's text track list of cues.
removeCue( cue )
Removes the given cue from textTrack's text track list of cues.
The addTextTrack(kind, label, language)
method of media elements, when invoked, must run the following steps:
Create a new TextTrack
object.
Create a new text track corresponding to the new object, and set its text track kind to kind, its text track label to label, its text track language to language, its text track readiness state to the text track loaded state, its text track mode to the text track hidden mode, and its text track list of cues to an empty list.
Initially, the text track list of cues is not associated with any rules for updating the text track rendering. When a text track cue is added to it, the text track list of cues has its rules permanently set accordingly.
Add the new text track to the media element's list of text tracks.
Queue a task to fire a trusted event with the name addtrack
, that does not bubble and is not cancelable, and
that uses the TrackEvent
interface, with the track
attribute initialised to the new text
track's TextTrack
object, at the media element's textTracks
attribute's TextTrackList
object.
Return the new TextTrack
object.
The kind
attribute must return the
text track kind of the text track that the TextTrack
object
represents.
The label
attribute must return the
text track label of the text track that the TextTrack
object represents.
The language
attribute must return the
text track language of the text track that the TextTrack
object represents.
The id
attribute returns the track's
identifier, if it has one, or the empty string otherwise. For tracks that correspond to
track
elements, the track's identifier is the value of the element's id
attribute, if any. For in-band tracks, the track's identifier is
specified by the media resource. If the media resource is in a format
that supports the Media Fragments URI fragment identifier syntax, the identifier
returned for a particular track must be the same identifier that would enable the track if used as
the name of a track in the track dimension of such a fragment identifier. [[!MEDIAFRAG]]
The inBandMetadataTrackDispatchType
attribute must return the text track in-band metadata track dispatch type of the
text track that the TextTrack
object represents.
The mode
attribute, on getting, must return
the string corresponding to the text track mode of the text track that
the TextTrack
object represents, as defined by the following list:
If the text track mode is the text track disabled mode
"disabled"
If the text track mode is the text track hidden mode
"hidden"
If the text track mode is the text track showing mode
"showing"
"On setting, if the new value isn't equal to what the attribute would currently return, the new value must be processed as follows:
If the new value is "disabled"
Set the text track mode of the text track that the TextTrack object represents to the text track disabled mode.
If the new value is "hidden"
Set the text track mode of the text track that the TextTrack object represents to the text track hidden mode.
If the new value is "showing"
Set the text track mode of the text track that the TextTrack object represents to the text track showing mode.
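For example, a script could move a track through the three modes; the comments summarise the behaviour defined above:

// Cycling a track through the three modes.
var track = document.querySelector('video').textTracks[0];
track.mode = 'hidden';   // cues are kept current, but nothing is rendered
track.mode = 'showing';  // cues are rendered as appropriate
track.mode = 'disabled'; // no cue events fire; cues and activeCues return null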
If the text track mode of the text track that the
TextTrack
object represents is not the text track disabled mode, then
the cues
attribute must return a
live TextTrackCueList
object that represents the subset of the
text track list of cues of the text track that the
TextTrack
object represents whose end
times occur at or after the earliest possible position when the script
started, in text track cue order. Otherwise, it must return null. For each TextTrack
object, when an
object is returned, the same TextTrackCueList
object must be returned each time.
The earliest possible position when the script started is whatever the earliest possible position was the last time the event loop reached step 1.
If the text track mode of the text track that the
TextTrack
object represents is not the text track disabled mode, then
the activeCues
attribute must return a
live TextTrackCueList
object that represents the subset of the
text track list of cues of the text track that the
TextTrack
object represents whose active flag was set when the script
started, in text track cue order. Otherwise, it must return null. For each TextTrack
object, when an
object is returned, the same TextTrackCueList
object must be returned each time.
A text track cue's active flag was set when the script started if its text track cue active flag was set the last time the event loop reached step 1.
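For example, a script could inspect both lists, bearing in mind that they reflect the state as of the moment the current task started running, and that both return null for a disabled track:

// Inspecting the live cue lists described above.
var track = document.querySelector('video').textTracks[0];
if (track.mode !== 'disabled') {          // cues is null for disabled tracks
  console.log('total cues:', track.cues.length);
  for (var i = 0; i < track.activeCues.length; i++) {
    console.log('active:', track.activeCues[i].id);
  }
}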
The addCue(cue)
method
of TextTrack
objects, when invoked, must run the following steps:
If the text track list of cues does not yet have any associated rules for updating the text track rendering, then associate the text track list of cues with the rules for updating the text track rendering appropriate to cue.
If text track list of cues' associated rules for updating the text
track rendering are not the same rules for updating the text track rendering
as appropriate for cue, then throw an InvalidStateError
exception and abort these steps.
If the given cue is in a text track list of cues, then remove cue from that text track list of cues.
Add cue to the method's TextTrack
object's text
track's text track list of cues.
The removeCue(cue)
method of TextTrack
objects, when invoked, must run the following steps:
If the given cue is not currently listed in the method's
TextTrack
object's text track's text track list of cues,
then throw a NotFoundError
exception and abort these steps.
Remove cue from the method's TextTrack
object's
text track's text track list of cues.
In this example, an audio
element is used to play a specific sound-effect from a
sound file containing many sound effects. A cue is used to pause the audio, so that it ends
exactly at the end of the clip, even if the browser is busy running some script. If the page had
relied on script to pause the audio, then the start of the next clip might be heard if the
browser was not able to run the script at the exact time specified.
var sfx = new Audio('sfx.wav');
var sounds = sfx.addTextTrack('metadata');

// add sounds we care about
function addFX(start, end, name) {
  var cue = new VTTCue(start, end, '');
  cue.id = name;
  cue.pauseOnExit = true;
  sounds.addCue(cue);
}
addFX(12.783, 13.612, 'dog bark');
addFX(13.612, 15.091, 'kitten mew');

function playSound(id) {
  sfx.currentTime = sounds.cues.getCueById(id).startTime;
  sfx.play();
}

// play a bark as soon as we can
sfx.oncanplaythrough = function () {
  playSound('dog bark');
};

// meow when the user tries to leave
window.onbeforeunload = function () {
  playSound('kitten mew');
  return 'Are you sure you want to leave this awesome page?';
};
interface TextTrackCueList {
  readonly attribute unsigned long length;
  getter TextTrackCue (unsigned long index);
  TextTrackCue? getCueById(DOMString id);
};
cuelist . length
Returns the number of cues in the list.
cuelist[ index ]
Returns the text track cue with index index in the list. The cues are sorted in text track cue order.
cuelist . getCueById( id )
Returns the first text track cue (in text track cue order) with text track cue identifier id. Returns null if none of the cues have the given identifier or if the argument is the empty string.
A TextTrackCueList
object represents a dynamically updating list of text track cues in a given order.
The length
attribute must return
the number of cues in the list represented by the
TextTrackCueList
object.
The supported property indices of a TextTrackCueList
object at any
instant are the numbers from zero to the number of cues in the
list represented by the TextTrackCueList
object minus one, if any. If there are no
cues in the list, there are no supported property
indices.
To determine the value of an indexed property for a given index index, the user agent must return the indexth text track
cue in the list represented by the TextTrackCueList
object.
The getCueById(id)
method, when called with an argument other than the empty string,
must return the first text track cue in the list represented by the
TextTrackCueList
object whose text track cue identifier is id, if any, or null otherwise. If the argument is the empty string, then the method
must return null.
interface TextTrackCue : EventTarget {
  readonly attribute TextTrack? track;

  attribute DOMString id;
  attribute double startTime;
  attribute double endTime;
  attribute boolean pauseOnExit;

  attribute EventHandler onenter;
  attribute EventHandler onexit;
};
cue . track
Returns the TextTrack object to which this text track cue belongs, if any, or null otherwise.
cue . id [ = value ]
Returns the text track cue identifier. Can be set.
cue . startTime [ = value ]
Returns the text track cue start time, in seconds. Can be set.
cue . endTime [ = value ]
Returns the text track cue end time, in seconds. Can be set.
cue . pauseOnExit [ = value ]
Returns true if the text track cue pause-on-exit flag is set, false otherwise. Can be set.
The track
attribute, on getting, must
return the TextTrack
object of the text track in whose list of cues the text track cue that the
TextTrackCue
object represents finds itself, if any; or null otherwise.
The id
attribute, on getting, must return
the text track cue identifier of the text track cue that the
TextTrackCue
object represents. On setting, the text track cue
identifier must be set to the new value.
The startTime
attribute, on
getting, must return the text track cue start time of the text track cue
that the TextTrackCue
object represents, in seconds. On setting, the text track
cue start time must be set to the new value, interpreted in seconds; then, if the
TextTrackCue
object's text track cue is in a text track's
list of cues, and that text track is in
a media element's list of text tracks, and the media
element's show poster flag is not set, then run the time marches on steps for that media element.
The endTime
attribute, on getting,
must return the text track cue end time of the text track cue that the
TextTrackCue
object represents, in seconds. On setting, the text track cue end
time must be set to the new value, interpreted in seconds; then, if the
TextTrackCue
object's text track cue is in a text track's
list of cues, and that text track is in
a media element's list of text tracks, and the media
element's show poster flag is not set, then run the time marches on steps for that media element.
The pauseOnExit
attribute, on
getting, must return true if the text track cue pause-on-exit flag of the text
track cue that the TextTrackCue
object represents is set; or false otherwise.
On setting, the text track cue pause-on-exit flag must be set if the new value is
true, and must be unset otherwise.
Media resources often contain one or more media-resource-specific text tracks containing data that browsers don't render, but instead expose to script so that the data can be handled there.
If the browser is unable to identify a TextTrackCue
interface that is more
appropriate to expose the data in the cues of a media-resource-specific text track,
the DataCue object is used. [[INBANDTRACKS]]
[Constructor(double startTime, double endTime, ArrayBuffer data)]
interface DataCue : TextTrackCue {
attribute ArrayBuffer data;
};
cue = new DataCue( startTime, endTime, data )
Returns a new DataCue object, for use with the addCue() method.
The startTime argument sets the text track cue start time. The endTime argument sets the text track cue end time. The data argument is copied as the text track cue data.
cue . data [ = value ]
Returns the text track cue data in raw unparsed form.
Can be set.
The data
attribute, on getting, must
return the raw text track cue data of the text track cue that the
TextTrackCue
object represents. On setting, the text track cue data must
be set to the new value.
The UA will use DataCue to expose only text track cue objects that belong to a text track that has a text track kind of metadata.
DataCue has a constructor to allow script to create DataCue objects in cases where generic metadata needs to be managed for a text track.
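A non-normative sketch of such use, assuming a user agent that implements the DataCue interface above (the byte values are arbitrary):

// Creating and observing a DataCue on a script-created metadata track.
var video = document.querySelector('video');
var track = video.addTextTrack('metadata'); // DataCue is for metadata tracks
var bytes = new Uint8Array([0x49, 0x44, 0x33]);
var cue = new DataCue(10.0, 15.0, bytes.buffer);
track.addCue(cue);
cue.onenter = function () {
  console.log(new Uint8Array(cue.data)); // the raw, unparsed cue data
};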
The rules for updating the text track rendering for a DataCue simply state that there is no rendering, even when the cues are in showing mode and the text track kind is one of subtitles, captions, descriptions, or chapters.
Chapters are segments of a media resource with a given title. Chapters can be nested, in the same way that sections in a document outline can have subsections.
Each text track cue in a text track being used for describing chapters has three key features: the text track cue start time, giving the start time of the chapter, the text track cue end time, giving the end time of the chapter, and the text track rules for extracting the chapter title.
The rules for constructing the chapter tree from a text track are as follows. They produce a potentially nested list of chapters, each of which have a start time, end time, title, and a list of nested chapters. This algorithm discards cues that do not correctly nest within each other, or that are out of order.
Let list be a copy of the list of cues of the text track being processed.
Remove from list any text track cue whose text track cue end time is before its text track cue start time.
Let output be an empty list of chapters, where a chapter is a record consisting of a start time, an end time, a title, and a (potentially empty) list of nested chapters. For the purpose of this algorithm, each chapter also has a parent chapter.
Let current chapter be a stand-in chapter whose start time is negative infinity, whose end time is positive infinity, and whose list of nested chapters is output. (This is just used to make the algorithm easier to describe.)
Loop: If list is empty, jump to the step labeled end.
Let current cue be the first cue in list, and then remove it from list.
If current cue's text track cue start time is less than the start time of current chapter, then return to the step labeled loop.
While current cue's text track cue start time is greater than or equal to current chapter's end time, let current chapter be current chapter's parent chapter.
If current cue's text track cue end time is greater than the end time of current chapter, then return to the step labeled loop.
Create a new chapter new chapter, whose start time is current cue's text track cue start time, whose end time is current cue's text track cue end time, whose title is the result of running current cue's text track rules for extracting the chapter title, and whose list of nested chapters is empty.
Append new chapter to current chapter's list of nested chapters, and let current chapter be new chapter's parent.
Let current chapter be new chapter.
Return to the step labeled loop.
End: Return output.
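A non-normative sketch of this algorithm in script form follows; getChapterTitle stands in for the format's text track rules for extracting the chapter title (for WebVTT, the cue's text):

// A sketch of the chapter tree algorithm above.
function constructChapterTree(cueList, getChapterTitle) {
  var list = [];
  for (var i = 0; i < cueList.length; i++) {
    var cue = cueList[i];
    if (cue.endTime >= cue.startTime) list.push(cue); // drop reversed cues
  }
  var output = [];
  var current = { startTime: -Infinity, endTime: Infinity,
                  children: output, parent: null };   // the stand-in chapter
  for (var j = 0; j < list.length; j++) {
    var c = list[j];
    if (c.startTime < current.startTime) continue;    // out of order: discard
    while (c.startTime >= current.endTime) current = current.parent;
    if (c.endTime > current.endTime) continue;        // does not nest: discard
    var chapter = { startTime: c.startTime, endTime: c.endTime,
                    title: getChapterTitle(c), children: [], parent: current };
    current.children.push(chapter);
    current = chapter;
  }
  return output;
}

For a WebVTT chapters track, this might be invoked as constructChapterTree(track.cues, function (cue) { return cue.text; }).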
The following snippet of a WebVTT file shows how nested chapters can be marked up. The file describes three 50-minute chapters, "Astrophysics", "Computational Physics", and "General Relativity". The first has three subchapters, the second has four, and the third has two. [[!WEBVTT]]
WEBVTT

00:00:00.000 --> 00:50:00.000
Astrophysics

00:00:00.000 --> 00:10:00.000
Introduction to Astrophysics

00:10:00.000 --> 00:45:00.000
The Solar System

00:45:00.000 --> 00:50:00.000
Coursework Description

00:50:00.000 --> 01:40:00.000
Computational Physics

00:50:00.000 --> 00:55:00.000
Introduction to Programming

00:55:00.000 --> 01:30:00.000
Data Structures

01:30:00.000 --> 01:35:00.000
Answers to Last Exam

01:35:00.000 --> 01:40:00.000
Coursework Description

01:40:00.000 --> 02:30:00.000
General Relativity

01:40:00.000 --> 02:00:00.000
Tensor Algebra

02:00:00.000 --> 02:30:00.000
The General Relativistic Field Equations
The following are the event handlers that (and their corresponding event handler event types) must be supported, as event handler IDL
attributes, by all objects implementing the TextTrackList
interface:
Event handler | Event handler event type
---|---
onchange | change
onaddtrack | addtrack
onremovetrack | removetrack
The following are the event handlers that (and their corresponding event handler event types) must be supported, as event handler IDL
attributes, by all objects implementing the TextTrack
interface:
Event handler | Event handler event type
---|---
oncuechange | cuechange
The following are the event handlers that (and their corresponding event handler event types) must be supported, as event handler IDL
attributes, by all objects implementing the TextTrackCue
interface:
Event handler | Event handler event type
---|---
onenter | enter
onexit | exit
This section is non-normative.
Text tracks can be used for storing data relating to the media data, for interactive or augmented views.
For example, a page showing a sports broadcast could include information about the current score. Suppose a robotics competition was being streamed live. The image could be overlaid with the scores.
In order to make the score display render correctly whenever the user seeks to an arbitrary point in the video, the metadata text track cues need to be as long as is appropriate for the score. For example, in the frame above, there might be one cue that lasts the length of the match that gives the match number, one cue that lasts until the blue alliance's score changes, and one cue that lasts until the red alliance's score changes. If the video is just a stream of the live event, the time in the bottom right would presumably be automatically derived from the current video time, rather than based on a cue. However, if the video was just the highlights, then that might be given in cues also.
The following shows what fragments of this could look like in a WebVTT file:
WEBVTT

...

05:10:00.000 --> 05:12:15.000
matchtype:qual
matchnumber:37

...

05:11:02.251 --> 05:11:17.198
red:78

05:11:03.672 --> 05:11:54.198
blue:66

05:11:17.198 --> 05:11:25.912
red:80

05:11:25.912 --> 05:11:26.522
red:83

05:11:26.522 --> 05:11:26.982
red:86

05:11:26.982 --> 05:11:27.499
red:89

...
The key here is to notice that the information is given in cues that span the length of time to which the relevant event applies. If, instead, the scores were given as zero-length (or very brief, nearly zero-length) cues when the score changes, for example saying "red+2" at 05:11:17.198, "red+3" at 05:11:25.912, etc, problems arise: primarily, seeking is much harder to implement, as the script has to walk the entire list of cues to make sure that no notifications have been missed; but also, if the cues are short it's possible the script will never see that they are active unless it listens to them specifically.
When using cues in this manner, authors are encouraged to use the cuechange
event to update the current annotations. (In
particular, using the timeupdate
event would be less
appropriate as it would require doing work even when the cues haven't changed, and, more
importantly, would introduce a higher latency between when the metadata cues become active and
when the display is updated, since timeupdate
events
are rate-limited.)
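A non-normative sketch of such a cuechange handler, where the element IDs and the cue text format ("red:78") are assumptions matching the fragment above:

// Updating a score display from cues like the ones above.
var video = document.querySelector('video');
var track = video.textTracks[0];  // the metadata track
track.mode = 'hidden';            // keep cues active without rendering them
track.oncuechange = function () {
  for (var i = 0; i < track.activeCues.length; i++) {
    var parts = track.activeCues[i].text.split(':'); // WebVTT cues expose .text
    if (parts[0] === 'red' || parts[0] === 'blue') {
      document.getElementById(parts[0] + '-score').textContent = parts[1];
    }
  }
};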
The controls
attribute is a boolean
attribute. If present, it indicates that the author has not provided a scripted controller
and would like the user agent to provide its own set of controls.
If the attribute is present, or if scripting is disabled for the media element, then the user agent should expose a user interface to the user. This user interface should include features to begin playback, pause playback, seek to an arbitrary position in the content (if the content supports arbitrary seeking), change the volume, change the display of closed captions or embedded sign-language tracks, select different audio tracks or turn on audio descriptions, and show the media content in manners more suitable to the user (e.g. full-screen video or in an independent resizable window). Other controls may also be made available.
If the media element has a current media controller, then the user
agent should expose audio tracks from all the slaved media elements (although
avoiding duplicates if the same media resource is being used several times). If a
media resource's audio track exposed in this way has no known name, and it is the
only audio track for a particular media element, the user agent should use the
element's title
attribute, if any, as the name (or as part of the
name) of that track.
Even when the attribute is absent, however, user agents may provide controls to affect playback
of the media resource (e.g. play, pause, seeking, and volume controls), but such features should
not interfere with the page's normal rendering. For example, such features could be exposed in the
media element's context menu. The user agent may implement this simply by exposing a user interface to the user as
described above (as if the controls
attribute was
present).
If the user agent exposes a user interface to
the user by displaying controls over the media element, then the user agent
should suppress any user interaction events while the user agent is interacting with this
interface. (For example, if the user clicks on a video's playback control, mousedown
events and so forth would not simultaneously be fired at
elements on the page.)
Where possible (specifically, for starting, stopping, pausing, and unpausing playback, for seeking, for changing the rate of playback, for fast-forwarding or rewinding, for listing, enabling, and disabling text tracks, and for muting or changing the volume of the audio), user interface features exposed by the user agent must be implemented in terms of the DOM API described above, so that, e.g., all the same events fire.
When a media element has a current media controller, the user agent's
user interface for pausing and unpausing playback, for seeking, for changing the rate of playback,
for fast-forwarding or rewinding, and for muting or changing the volume of audio of the entire
group must be implemented in terms of the MediaController
API exposed on that
current media controller. When a media element has a current media
controller, and all the slaved media elements of that
MediaController
are paused, the user agent should also unpause all the slaved
media elements when the user invokes a user agent interface control for beginning
playback.
The "play" function in the user agent's interface must set the playbackRate
attribute to the value of the defaultPlaybackRate
attribute before invoking the play()
method. When a media element has a current media controller, the
attributes and method with those names on that MediaController
object must be used.
Otherwise, the attributes and method with those names on the media element itself
must be used.
Features such as fast-forward or rewind must be implemented by only changing the playbackRate
attribute (and not the defaultPlaybackRate
attribute). Again, when a media element has a current media controller,
the attributes with those names on that MediaController
object must be used;
otherwise, the attributes with those names on the media element itself must be used.
When a media element has a current media controller, seeking must be
implemented in terms of the currentTime
attribute on that MediaController
object. Otherwise, the user agent must directly
seek to the requested position in the media
element's media timeline. For media resources where seeking to an arbitrary
position would be slow, user agents are encouraged to use the approximate-for-speed flag
when seeking in response to the user manipulating an approximate position interface such as a seek
bar.
When a media element has a current media controller, user agents may
additionally provide the user with controls that directly manipulate an individual media
element without affecting the MediaController
, but such features are
considered relatively advanced and unlikely to be useful to most users.
The activation behaviour of a media element that is exposing a user interface to the user must be to run the following steps:
If the media element has a current media controller, and that
current media controller is a restrained media controller, then invoke
the play()
method of the
MediaController
and abort these steps.
If the media element has a current media controller,
and that current media controller is a paused media controller, all
of the MediaController
's slaved media elements have ended
playback, and the media controller playback rate is positive or zero, then
seek the media controller to zero.
If the media element has a current media controller,
and that current media controller is a paused media controller, then
invoke the unpause()
method of the
MediaController
and abort these steps.
If the media element has a current media controller,
then that current media controller is a playing media controller;
invoke the pause()
method of the
MediaController
and abort these steps.
If
the media element's paused
attribute is true,
then invoke the play()
method on the media
element and abort these steps.
Invoke the pause()
method on the media
element.
For the purposes of listing chapters in the media resource, only text tracks in the media element's list of text tracks
that are showing and whose text track kind is
chapters
should be used. Such tracks must be
interpreted according to the rules for constructing the chapter tree from a text
track. When seeking in response to a user maniplating a chapter selection interface, user
agents should not use the approximate-for-speed flag.
The controls
IDL attribute must
reflect the content attribute of the same name.
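Because of this reflection, script can hand the user interface back and forth between the user agent and a custom controller, for example:

// Toggling between native and scripted controls.
var video = document.querySelector('video');
video.controls = false; // removes the attribute; the page supplies its own controls
// ...and if the scripted controller cannot be set up:
video.controls = true;  // restores the user agent's controls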
volume [ = value ]
Returns the current playback volume, as a number in the range 0.0 to 1.0, where 0.0 is the quietest and 1.0 the loudest.
Can be set, to change the volume.
Throws an IndexSizeError exception if the new value is not in the range 0.0 .. 1.0.
muted [ = value ]
Returns true if audio is muted, overriding the volume attribute, and false if the volume attribute is being honored.
Can be set, to change whether the audio is muted or not.
A media element has a playback volume, which is a fraction in the range 0.0 (silent) to 1.0 (loudest). Initially, the volume should be 1.0, but user agents may remember the last set value across sessions, on a per-site basis or otherwise, so the volume may start at other values.
The volume
IDL attribute must return the
playback volume of any audio portions of the
media element. On setting, if the new value is in the range 0.0 to 1.0 inclusive, the
media element's playback volume must be
set to the new value. If the new value is outside the range 0.0 to 1.0 inclusive, then, on
setting, an IndexSizeError
exception must be thrown instead.
A media element can also be muted. If anything is muting the element, then it is muted. (For example, when the direction of playback is backwards, the element is muted.)
The muted
IDL attribute must return the value
to which it was last set. When a media element is created, if the element has a muted
content attribute specified, then the muted
IDL attribute should be set to true; otherwise, the user
agents may set the value to the user's preferred value (e.g. remembering the last set value across
sessions, on a per-site basis or otherwise). While the muted
IDL attribute is set to true, the media element must be muted.
Whenever either of the values that would be returned by the volume
and muted
IDL
attributes change, the user agent must queue a task to fire a simple
event named volumechange
at the media
element.
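For example (both assignments below fire volumechange, and the out-of-range assignment throws):

// Observing volume changes and the range check described above.
var video = document.querySelector('video');
video.addEventListener('volumechange', function () {
  console.log('volume:', video.volume, 'muted:', video.muted);
});
video.volume = 0.5;
video.muted = true;
try {
  video.volume = 1.5;  // outside 0.0..1.0
} catch (e) {
  console.log(e.name); // "IndexSizeError"
}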
An element's effective media volume is determined as follows:
If the user has indicated that the user agent is to override the volume of the element, then the element's effective media volume is the volume desired by the user. Abort these steps.
If the element's audio output is muted, the element's effective media volume is zero. Abort these steps.
If the element has a current media controller and that
MediaController
object's media controller mute override is true, the
element's effective media volume is zero. Abort these steps.
Let volume be the playback volume of the audio portions of the media element, in range 0.0 (silent) to 1.0 (loudest).
If the element has a current media controller, multiply volume by that MediaController
object's media controller
volume multiplier. (The media controller volume multiplier is in the range
0.0 to 1.0, so this can only reduce the value.)
The element's effective media volume is volume, interpreted relative to the range 0.0 to 1.0, with 0.0 being silent, and 1.0 being the loudest setting, values in between increasing in loudness. The range need not be linear. The loudest setting may be lower than the system's loudest possible setting; for example the user could have set a maximum volume.
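A non-normative transcription of these steps as a pure function; the inputs model user-agent state that is not directly exposed to scripts:

// A direct transcription of the effective media volume steps above.
function effectiveMediaVolume(state) {
  if (state.userOverride !== null) return state.userOverride; // user override wins
  if (state.muted) return 0.0;                                // muted output
  if (state.controller && state.controller.muteOverride) return 0.0;
  var volume = state.volume;                                  // 0.0 .. 1.0
  if (state.controller) volume *= state.controller.volumeMultiplier;
  return volume; // relative to 0.0 (silent) .. 1.0 (loudest)
}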
The muted
content attribute on media elements is a boolean attribute that controls the
default state of the audio output of the media resource, potentially overriding user
preferences.
The defaultMuted
IDL attribute must
reflect the muted
content attribute.
This attribute has no dynamic effect (it only controls the default state of the element).
This video (an advertisement) autoplays, but to avoid annoying users, it does so without sound, and allows the user to turn the sound on.
<video src="adverts.cgi?kind=video" controls autoplay loop muted></video>
Objects implementing the TimeRanges
interface
represent a list of ranges (periods) of time.
interface TimeRanges {
  readonly attribute unsigned long length;
  double start(unsigned long index);
  double end(unsigned long index);
};
length
Returns the number of ranges in the object.
start( index )
Returns the time for the start of the range with the given index.
Throws an IndexSizeError exception if the index is out of range.
end( index )
Returns the time for the end of the range with the given index.
Throws an IndexSizeError exception if the index is out of range.
The length
IDL attribute must return the
number of ranges represented by the object.
The start(index)
method must return the position of the start of the indexth range represented
by the object, in seconds measured from the start of the timeline that the object covers.
The end(index)
method
must return the position of the end of the indexth range represented by the
object, in seconds measured from the start of the timeline that the object covers.
These methods must throw IndexSizeError
exceptions if called with an index argument greater than or equal to the number of ranges represented by the
object.
When a TimeRanges
object is said to be a normalised TimeRanges
object, the ranges it represents must obey the following criteria:

- The start of a range must be greater than the end of all earlier ranges in that object.
- The start of a range must be less than or equal to the end of that same range.
In other words, the ranges in such an object are ordered, don't overlap, and don't touch (adjacent ranges are folded into one bigger range). A range can be empty (referencing just a single moment in time), e.g. to indicate that only one frame is currently buffered in the case that the user agent has discarded the entire media resource except for the current frame, when a media element is paused.
Ranges in a TimeRanges
object must be inclusive.
Thus, the end of a range would be equal to the start of a following adjacent (touching but not overlapping) range. Similarly, a range covering a whole timeline anchored at zero would have a start equal to zero and an end equal to the duration of the timeline.
The timelines used by the objects returned by the buffered
, seekable
and
played
IDL attributes of media
elements must be that element's media timeline.
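For example, a script could walk the ranges of the buffered attribute:

// Walking the ranges of a normalised TimeRanges object.
var video = document.querySelector('video');
for (var i = 0; i < video.buffered.length; i++) {
  console.log('buffered', video.buffered.start(i), 'to', video.buffered.end(i));
}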
The TrackEvent interface

[Constructor(DOMString type, optional TrackEventInit eventInitDict)]
interface TrackEvent : Event {
  readonly attribute (VideoTrack or AudioTrack or TextTrack)? track;
};

dictionary TrackEventInit : EventInit {
  (VideoTrack or AudioTrack or TextTrack)? track;
};
event . track
Returns the track object (TextTrack, AudioTrack, or VideoTrack) to which the event relates.
The track
attribute must return the value
it was initialised to. When the object is created, this attribute must be initialised to null. It
represents the context information for the event.
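For example, a script could observe tracks coming and going on a media element's textTracks list:

// Observing tracks as they are added and removed.
var video = document.querySelector('video');
video.textTracks.addEventListener('addtrack', function (event) {
  console.log('added', event.track.kind, event.track.label);
});
video.textTracks.addEventListener('removetrack', function (event) {
  console.log('removed', event.track.kind);
});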
This section is non-normative.
The following events fire on media elements as part of the processing model described above:
Event name | Interface | Fired when... | Preconditions
---|---|---|---
loadstart | Event | The user agent begins looking for media data, as part of the resource selection algorithm. | networkState equals NETWORK_LOADING
progress | Event | The user agent is fetching media data. | networkState equals NETWORK_LOADING
suspend | Event | The user agent is intentionally not currently fetching media data. | networkState equals NETWORK_IDLE
abort | Event | The user agent stops fetching the media data before it is completely downloaded, but not due to an error. | error is an object with the code MEDIA_ERR_ABORTED. networkState equals either NETWORK_EMPTY or NETWORK_IDLE, depending on when the download was aborted.
error | Event | An error occurs while fetching the media data. | error is an object with the code MEDIA_ERR_NETWORK or higher. networkState equals either NETWORK_EMPTY or NETWORK_IDLE, depending on when the download was aborted.
emptied | Event | A media element whose networkState was previously not in the NETWORK_EMPTY state has just switched to that state (either because of a fatal error during load that's about to be reported, or because the load() method was invoked while the resource selection algorithm was already running). | networkState is NETWORK_EMPTY; all the IDL attributes are in their initial states.
stalled | Event | The user agent is trying to fetch media data, but data is unexpectedly not forthcoming. | networkState is NETWORK_LOADING.
loadedmetadata | Event | The user agent has just determined the duration and dimensions of the media resource and the text tracks are ready. | readyState is newly equal to HAVE_METADATA or greater for the first time.
loadeddata | Event | The user agent can render the media data at the current playback position for the first time. | readyState newly increased to HAVE_CURRENT_DATA or greater for the first time.
canplay | Event | The user agent can resume playback of the media data, but estimates that if playback were to be started now, the media resource could not be rendered at the current playback rate up to its end without having to stop for further buffering of content. | readyState newly increased to HAVE_FUTURE_DATA or greater.
canplaythrough | Event | The user agent estimates that if playback were to be started now, the media resource could be rendered at the current playback rate all the way to its end without having to stop for further buffering. | readyState is newly equal to HAVE_ENOUGH_DATA.
playing | Event | Playback is ready to start after having been paused or delayed due to lack of media data. | readyState is newly equal to or greater than HAVE_FUTURE_DATA and paused is false, or paused is newly false and readyState is equal to or greater than HAVE_FUTURE_DATA. Even if this event fires, the element might still not be potentially playing, e.g. if the element is blocked on its media controller (e.g. because the current media controller is paused, or another slaved media element is stalled somehow, or because the media resource has no data corresponding to the media controller position), or the element is paused for user interaction or paused for in-band content.
waiting | Event | Playback has stopped because the next frame is not available, but the user agent expects that frame to become available in due course. | readyState is equal to or less than HAVE_CURRENT_DATA, and paused is false. Either seeking is true, or the current playback position is not contained in any of the ranges in buffered. It is possible for playback to stop for other reasons without paused being false, but those reasons do not fire this event (and when those situations resolve, a separate playing event is not fired either): e.g. the element is newly blocked on its media controller, or playback ended, or playback stopped due to errors, or the element has paused for user interaction or paused for in-band content.
seeking | Event | The seeking IDL attribute changed to true, and the user agent has started seeking to a new position. |
seeked | Event | The seeking IDL attribute changed to false after the current playback position was changed. |
ended | Event | Playback has stopped because the end of the media resource was reached. | currentTime equals the end of the media resource; ended is true.
durationchange | Event | The duration attribute has just been updated. |
timeupdate | Event | The current playback position changed as part of normal playback or in an especially interesting way, for example discontinuously. |
play | Event | The element is no longer paused. Fired after the play() method has returned, or when the autoplay attribute has caused playback to begin. | paused is newly false.
pause | Event | The element has been paused. Fired after the pause() method has returned. | paused is newly true.
ratechange | Event | Either the defaultPlaybackRate or the playbackRate attribute has just been updated. |
resize | Event | One or both of the videoWidth and videoHeight attributes have just been updated. | Media element is a video element; readyState is not HAVE_NOTHING
volumechange | Event | Either the volume attribute or the muted attribute has changed. Fired after the relevant attribute's setter has returned. |
The following events fire on MediaController
objects:
Event name | Interface | Fired when...
---|---|---
emptied | Event | All the slaved media elements newly have readyState set to HAVE_NOTHING or greater, or there are no longer any slaved media elements.
loadedmetadata | Event | All the slaved media elements newly have readyState set to HAVE_METADATA or greater.
loadeddata | Event | All the slaved media elements newly have readyState set to HAVE_CURRENT_DATA or greater.
canplay | Event | All the slaved media elements newly have readyState set to HAVE_FUTURE_DATA or greater.
canplaythrough | Event | All the slaved media elements newly have readyState set to HAVE_ENOUGH_DATA.
playing | Event | The MediaController is no longer a blocked media controller.
waiting | Event | The MediaController is now a blocked media controller.
ended | Event | All the slaved media elements have newly ended playback; the MediaController has reached the end of all the slaved media elements.
durationchange | Event | The duration attribute has just been updated.
timeupdate | Event | The media controller position changed.
play | Event | The paused attribute is newly false.
pause | Event | The paused attribute is newly true.
ratechange | Event | Either the defaultPlaybackRate attribute or the playbackRate attribute has just been updated.
volumechange | Event | Either the volume attribute or the muted attribute has just been updated.
The following events fire on AudioTrackList
, VideoTrackList
, and
TextTrackList
objects:
Event name | Interface | Fired when...
---|---|---
change | Event | One or more tracks in the track list have been enabled or disabled.
addtrack | TrackEvent | A track has been added to the track list.
removetrack | TrackEvent | A track has been removed from the track list.
The following event fires on TextTrack
objects and track
elements:
Event name | Interface | Fired when...
---|---|---
cuechange | Event | One or more cues in the track have become active or stopped being active.
The following events fire on TextTrackCue
objects:
Event name | Interface | Fired when...
---|---|---
enter | Event | The cue has become active.
exit | Event | The cue has stopped being active.
The main security and privacy implications of the video
and audio
elements come from the ability to embed media cross-origin. There are two directions that threats
can flow: from hostile content to a victim page, and from a hostile page to victim content.
If a victim page embeds hostile content, the threat is that the content might contain scripted
code that attempts to interact with the Document
that embeds the content. To avoid
this, user agents must ensure that there is no access from the content to the embedding page. In
the case of media content that uses DOM concepts, the embedded content must be treated as if it
was in its own unrelated top-level browsing context.
For instance, if an SVG animation was embedded in a video
element,
the user agent would not give it access to the DOM of the outer page. From the perspective of
scripts in the SVG resource, the SVG file would appear to be in a lone top-level browsing context
with no parent.
If a hostile page embeds victim content, the threat is that the embedding page could obtain
information from the content that it would not otherwise have access to. The API does expose some
information: the existence of the media, its type, its duration, its size, and the performance
characteristics of its host. Such information is already potentially problematic, but in practice
the same information can more or less be obtained using the img
element, and so it
has been deemed acceptable.
However, significantly more sensitive information could be obtained if the user agent further
exposes metadata within the content such as subtitles or chapter titles. Such information is
therefore only exposed if the video resource passes a CORS resource sharing check.
The crossorigin
attribute allows authors to control
how this check is performed. [[!FETCH]]
Without this restriction, an attacker could trick a user running within a corporate network into visiting a site that attempts to load a video from a previously leaked location on the corporation's intranet. If such a video included confidential plans for a new product, then being able to read the subtitles would present a serious confidentiality breach.
This section is non-normative.
Playing audio and video resources on small devices such as set-top boxes or mobile phones is
often constrained by limited hardware resources in the device. For example, a device might only
support three simultaneous videos. For this reason, it is a good practice to release resources
held by media elements when they are done playing, either by
being very careful about removing all references to the element and allowing it to be garbage
collected, or, even better, by removing the element's src
attribute and any source
element descendants, and invoking the element's load()
method.
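A non-normative helper implementing this advice:

// Releasing a media element's resources as recommended above.
function releaseMediaElement(media) {
  media.removeAttribute('src');
  var sources = media.querySelectorAll('source');
  for (var i = 0; i < sources.length; i++) {
    sources[i].parentNode.removeChild(sources[i]);
  }
  media.load(); // aborts any fetch and lets buffers be reclaimed
}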
Similarly, when the playback rate is not exactly 1.0, hardware, software, or format limitations can cause video frames to be dropped and audio to be choppy or muted.
This section is non-normative.
How accurately various aspects of the media element API are implemented is considered a quality-of-implementation issue.
For example, when implementing the buffered
attribute,
how precise an implementation reports the ranges that have been buffered depends on how carefully
the user agent inspects the data. Since the API reports ranges as times, but the data is obtained
in byte streams, a user agent receiving a variable-bit-rate stream might only be able to determine
precise times by actually decoding all of the data. User agents aren't required to do this,
however; they can instead return estimates (e.g. based on the average bit rate seen so far) which
get revised as more information becomes available.
As a general rule, user agents are urged to be conservative rather than optimistic. For example, it would be bad to report that everything had been buffered when it had not.
Another quality-of-implementation issue would be playing a video backwards when the codec is designed only for forward playback (e.g. there aren't many key frames, and they are far apart, and the intervening frames only have deltas from the previous frame). User agents could do a poor job, e.g. only showing key frames; however, better implementations would do more work and thus do a better job, e.g. actually decoding parts of the video forwards, storing the complete frames, and then playing the frames backwards.
Similarly, while implementations are allowed to drop buffered data at any time (there is no requirement that a user agent keep all the media data obtained for the lifetime of the media element), it is again a quality of implementation issue: user agents with sufficient resources to keep all the data around are encouraged to do so, as this allows for a better user experience. For example, if the user is watching a live stream, a user agent could allow the user only to view the live video; however, a better user agent would buffer everything and allow the user to seek through the earlier material, pause it, play it forwards and backwards, etc.
When multiple tracks are synchronised with a MediaController
, it is possible for
scripts to add and remove media elements from the MediaController
's list of
slaved media elements, even while these tracks are playing. How smoothly the media
plays back in such situations is another quality-of-implementation issue.
When a media element that is paused is removed from a document and not reinserted before the next time the event loop reaches step 1, implementations that are resource constrained are encouraged to take that opportunity to release all hardware resources (like video planes, networking resources, and data buffers) used by the media element. (User agents still have to keep track of the playback position and so forth, though, in case playback is later restarted.)