Once the basic multimedia content has been enriched by these automatic annotation methods, it is well known that some of the newly created metadata will be incorrect or simply off topic. Automatic detection is never 100% accurate, and content owners should not expect a high correctness rate. The reasons for this shortcoming are numerous, but they generally fall into two categories: false negatives, where the system fails to identify an object, and false positives, where an identified object is irrelevant in context. For example, when the automatic annotation tool detects a visit by Barack Obama to Berlin in material from RBB (the TV station for the Berlin area), the system will detect the location Berlin and attach metadata about it, such as the Wikipedia page. But explaining what Berlin is to Berliners is useless.

In both cases it is clear that an editorial tool is needed to let the content provider quickly judge the work done by the system, remove or fix incorrect annotations, and, where necessary, add missing elements, so that the quality of the newly created metadata is good enough to be passed to the next phase of the project. The editorial tool will be a web-based interface built with the same multiscreen tool as the one for the end users, but accessible only to SME users for predefined content collections. It will use the APIs defined in the earlier phases to present the content and its enrichments to the editorial staff, who can work on them either privately or within an editorial group. The system will track the changes made by the editorial staff per collection; it will not override the original enrichment data but will extend it with importance and correctness values in a new collection, to be used by the SMEs through the APIs or the other workflow tools in this project.
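The non-destructive revision model described above, where editorial verdicts extend rather than overwrite the automatic enrichments, could be sketched as follows. This is only an illustrative sketch: the deliverable does not specify a data schema, so every class, field, and function name here is a hypothetical assumption.

```python
from dataclasses import dataclass, replace
from typing import Dict, List, Optional, Tuple

@dataclass(frozen=True)
class Enrichment:
    """One automatically generated annotation, e.g. a detected location.
    (Hypothetical schema; the project's actual API may differ.)"""
    entity: str                          # e.g. "Berlin"
    source_url: str                      # e.g. a linked Wikipedia page
    importance: Optional[float] = None   # editorial value, 0.0 - 1.0
    correct: Optional[bool] = None       # editorial correctness verdict

def review(original: List[Enrichment],
           verdicts: Dict[str, Tuple[float, bool]]) -> List[Enrichment]:
    """Build a NEW collection of enrichments extended with editorial
    importance/correctness values; the original collection is untouched."""
    reviewed = []
    for e in original:
        if e.entity in verdicts:
            imp, ok = verdicts[e.entity]
            # replace() returns a copy, so the automatic enrichment survives
            reviewed.append(replace(e, importance=imp, correct=ok))
        else:
            reviewed.append(e)  # not yet reviewed, carried over as-is
    return reviewed

# The Obama-in-Berlin example: the "Berlin" annotation is correct but
# unimportant for a Berlin audience, so an editor marks it low-importance.
auto = [Enrichment("Berlin", "https://en.wikipedia.org/wiki/Berlin")]
edited = review(auto, {"Berlin": (0.1, True)})
```

Keeping the automatic and editorial collections separate in this way is what lets downstream workflow tools choose between the raw enrichments and the reviewed ones through the same APIs.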