WebVfx 0.1.6-6-g5144893-dirty
WebVfx supports both QML Effects Authoring and Web (HTML) Effects Authoring for developing video effects. Both have a lot in common. WebVfx loads the effect content (HTML or QML) and exposes a JavaScript context object named webvfx to the effect implementation.
If the effect will need to access frames of video, it must set the webvfx.imageTypeMap property to a map describing the names it will use for each video source. Each name should be mapped to one of the enumerations:

webvfx.SourceImageType
Indicates the image name is the source image of a transition (the image being transitioned from), or the primary image of a filter.

webvfx.TargetImageType
Indicates the image name is the target image of a transition (the image being transitioned to).

webvfx.ExtraImageType
Indicates the image name is an extra asset. There can be multiple image names with this type.

For example:

webvfx.imageTypeMap = { "sourceImage" : webvfx.SourceImageType, "targetImage" : webvfx.TargetImageType }
The effect can request additional named parameters as part of initialization by calling webvfx.getStringParameter(name) or webvfx.getNumberParameter(name).
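For example (a minimal sketch; the parameter names here are illustrative and must match whatever the host application provides):

var title = webvfx.getStringParameter("title");
var opacity = webvfx.getNumberParameter("opacity");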
The effect must connect to the webvfx.renderRequested(time) signal. See QML Effects Authoring or Web (HTML) Effects Authoring for how to connect to this signal.
When the effect has fully loaded (including any external resources being loaded asynchronously), it should call:

webvfx.readyRender(true)

If the load failed for any reason, pass false instead.
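A minimal sketch of this pattern for an HTML effect that waits for one image asset (the asset URL is illustrative):

var logo = new Image();
logo.onload = function () {
    webvfx.readyRender(true);   // all resources loaded, ready to render
};
logo.onerror = function () {
    webvfx.readyRender(false);  // loading failed
};
logo.src = "logo.png";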
Now WebVfx will start rendering frames of video. It will pull the current frame from each of the video sources specified in webvfx.imageTypeMap, then invoke the webvfx.renderRequested(time) signal. The time is a normalized time from 0 to 1.0 indicating the position in the transition or effect. The effect should then request any images it specified in webvfx.imageTypeMap each time it handles renderRequested. Images can be requested by calling webvfx.getImage(name), where name is the string image name specified in imageTypeMap. See QML Effects Authoring or Web (HTML) Effects Authoring for how to use the returned image object. The effect should configure itself using the time value and the images it retrieved before returning from the renderRequested slot.
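A minimal sketch of a render handler for a transition (the connection syntax differs between QML and HTML, as shown in the following sections):

function render(time) {
    // Fetch the current frame for each name declared in webvfx.imageTypeMap
    var source = webvfx.getImage("sourceImage");
    var target = webvfx.getImage("targetImage");
    // Configure the effect for this frame using the normalized time,
    // e.g. crossfade from source to target as time goes from 0 to 1.
}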
Effects can be authored using QtQuick QML, a declarative UI language.
The webvfx.renderRequested(time) signal can be handled using the QML Connections element with webvfx as the target. The time parameter is available to the handler code, e.g.:
Connections {
    target: webvfx
    onRenderRequested: {
        effect.textureImage = webvfx.getImage("sourceImage");
        console.log("render: " + time);
    }
}
Video frame images retrieved via webvfx.getImage(name) are QImage objects. These can be assigned directly to some QML properties, e.g. Effect.textureImage. Other QML properties require an image URL; this can be retrieved via webvfx.getImageUrl(name). It is more efficient to use the image directly when possible, instead of the URL.
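For example, inside the onRenderRequested handler (imageItem is a hypothetical id of a QML Image element declared elsewhere in the effect):

imageItem.source = webvfx.getImageUrl("sourceImage");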
QML is more interesting as a video effects technology when it is extended with 3D support - see 3D Effects Authoring.
Effects can be authored using QtWebKit HTML.
The webvfx.renderRequested(time) signal can be handled by connecting it to a JavaScript function that takes a time parameter, using webvfx.renderRequested.connect:
function render(time) {
console.log("render: " + time);
}
webvfx.renderRequested.connect(render);
webvfx.getImage(name) returns a JavaScript image proxy object for the current frame of video for the named image. This must be assigned to a DOM Image element so that it can be used in the HTML. The QtWebKit Bridge provides a method assignToHTMLImageElement() to do this. You can assign to a new Image:
var image = new Image();
webvfx.getImage("sourceImage").assignToHTMLImageElement(image);
or reference an existing one in the DOM:

<img id="image"/>
[...]
webvfx.getImage("sourceImage").assignToHTMLImageElement(document.getElementById("image"));
WebVfx includes a simple framework for implementing 2D GLSL fragment shader effects. This requires QtWebKit to be compiled with WebGL enabled. A recent build should be used so that the toImageData feature is available.
The HTML effect should reference the shader.js JavaScript resource:

<script type="text/javascript" src="qrc:/webvfx/script/shader.js"></script>
The GLSL code can be placed in a script element:

<script type="x-shader/x-fragment">...</script>
The GLSL must declare a varying texCoord, which carries the texture coordinates from the vertex shader:

varying vec2 texCoord;
It should also declare any uniforms it uses. Uniform values can be set on each render cycle from JavaScript using updateUniform(name, value). If the uniform is a sampler2D texture, it should use the ImageData returned from the WebVfx image object:

shader.updateUniform("sourceTex", webvfx.getImage("sourceImage").toImageData());
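Scalar uniforms can be updated the same way on each render cycle; for instance, assuming the shader declares a uniform float time, the renderRequested handler could pass its time parameter through:

shader.updateUniform("time", time);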
See the sample CrossZoom and PageCurl shaders for complete examples.
A couple of simple tools are provided to help with authoring effects.
webvfx_browser (WebVfx Browser.app on MacOS) is a trivial wrapper around QtWebKit. This makes it easy to visit any website and see if the version of QtWebKit you are using supports various HTML features.
webvfx_viewer (WebVfx Viewer.app on MacOS) allows you to load your HTML or QML effects, exposes the webvfx context object to them, and generates images that your effect can request using webvfx.getImage(name). It has a slider along the bottom that lets you control the rendering time (0..1.0) and a tab that lets you set the rendering size.