Once the multimedia content analysis, the automatic annotation and the editorial procedures performed by the annotation and editorial tool have been completed, the enriched video is stored in the system. However, the question remains of how, when and where to present the enriched content to the end users/consumers. The content now carries many additional informational hooks that make it easy to, for example, show relevant advertisements or related content, but this is still static information.
At this point, the Multi-screen Tool provides a decision engine that takes the enriched metadata, advertisements, live detected social data and information about the current context to create a scenario and display it on the available screen(s). This creates a new relationship, binding the content provider and the SMEs who want to integrate that content. Unlike before, when an SME would buy the rights to content with only a minimal set of metadata, it will now buy content that comes with enriched time-based metadata, complete with an API that provides services during active use, for example signalling when new related content is found or when a topic that is part of the licensed collection becomes trending on Twitter. The API also allows SMEs to send signals back into the system, such as comments and viewing signals (play, pause, bookmarking), so that the social & personalization components can react and improve the ongoing user experience.
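The bidirectional signal exchange described above can be sketched as a small event bus. This is purely illustrative: the type names (`SignalType`, `CollectionSignal`, `SignalBus`) and the in-memory dispatch are assumptions, standing in for whatever web API the platform actually exposes.

```typescript
// Illustrative sketch of signals flowing between the platform and an SME.
// Platform -> SME: "related-content-found", "topic-trending".
// SME -> platform: viewing signals such as "play", "pause", "bookmark", "comment".
type SignalType =
  | "related-content-found"
  | "topic-trending"
  | "play"
  | "pause"
  | "bookmark"
  | "comment";

interface CollectionSignal {
  type: SignalType;
  collectionId: string; // the licensed collection the signal belongs to
  timestamp: number;    // media time in seconds, matching the time-based metadata
  payload?: unknown;    // e.g. a related-content URI or a trending hashtag
}

// A minimal in-memory bus; the real platform would carry these over the network.
class SignalBus {
  private handlers = new Map<SignalType, Array<(s: CollectionSignal) => void>>();

  on(type: SignalType, handler: (s: CollectionSignal) => void): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  emit(signal: CollectionSignal): void {
    for (const h of this.handlers.get(signal.type) ?? []) h(signal);
  }
}

// Example: an SME frontend reacts to a trending topic and reports a pause.
const bus = new SignalBus();
bus.on("topic-trending", s =>
  console.log(`Topic trending in collection ${s.collectionId}:`, s.payload));
bus.emit({ type: "topic-trending", collectionId: "demo-collection",
           timestamp: 0, payload: "#example" });
bus.emit({ type: "pause", collectionId: "demo-collection", timestamp: 73.2 });
```

The point of the sketch is the symmetry: the same channel that pushes enrichment updates out to the SME carries viewing signals back in for the social and personalization components.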
Once we have made the jump to SMEs licensing these active collections, which combine the content, the enrichments and the scenario engine with signals flowing in and out, we further need to support them in implementing and maintaining this new and more complex paradigm. This is done in three ways. First, we provide easy-to-use APIs to our SaaS services, enabling SMEs to create new interactive frontends. Secondly, for SMEs with a limited technical background, we provide a frontend framework with easily adaptable pre-built building blocks. These building blocks use the APIs in the platform and turn them into ready-to-use features.
Examples of these building blocks could be login methods and user profile parts, bookmarking, commenting, voting and sharing elements, and other parts needed in multi-screen applications. Thirdly, we provide an IDE that allows SMEs to combine these elements into multi-screen applications and host them on their own server cluster or on a cloud-based server cluster.
Within the toolkit we will pursue the following design goals:
- Provide APIs, modules and communication layers to reduce implementation time
- Allow usage of known protocols like MPEG-DASH, DIAL, UPnP and Bonjour, with DRM if needed
- Allow for inclusion of APIs in this area (W3C screen APIs, HbbTV, fragments API)
- Allow modules from other projects in which partners are involved, such as LinkedTV, EUScreen and several Europeana API/Labs projects, to be used/accessed
- Be platform neutral and support desktop, smartphones, tablets and TV screens
- Add management tools/building blocks so content owners and advertisers can tune the modules and the scenario engine
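The platform-neutrality goal can be illustrated with a minimal sketch: each connected screen advertises a small capability descriptor (in practice discovered via protocols such as DIAL, UPnP or Bonjour), and the scenario engine routes content accordingly. The types and the routing rule below are illustrative assumptions, not the toolkit's actual design.

```typescript
// The four screen classes the toolkit targets.
type ScreenKind = "desktop" | "smartphone" | "tablet" | "tv";

// A hypothetical capability descriptor a discovery protocol could report.
interface Screen {
  id: string;
  kind: ScreenKind;
  canPlayVideo: boolean;
  hasKeyboard: boolean;
}

// One possible routing rule: main video goes to the "biggest" video-capable
// screen, so companion screens stay free for interactive building blocks.
function routeVideo(screens: Screen[]): Screen | undefined {
  const preference: ScreenKind[] = ["tv", "desktop", "tablet", "smartphone"];
  const playable = screens.filter(s => s.canPlayVideo);
  playable.sort(
    (a, b) => preference.indexOf(a.kind) - preference.indexOf(b.kind)
  );
  return playable[0];
}

// Example: with a phone and a TV available, the video lands on the TV.
const target = routeVideo([
  { id: "phone", kind: "smartphone", canPlayVideo: true, hasKeyboard: false },
  { id: "living-room", kind: "tv", canPlayVideo: true, hasKeyboard: false },
]);
console.log("play main video on:", target?.id);
```

A rule like this is exactly the kind of behaviour the management tools mentioned above would let content owners and advertisers tune, for instance preferring a tablet over a TV for a second-screen-centric scenario.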
The video below demonstrates how the Multi-screen Tool works in practice.