Saturday, July 4, 2009

Georeferenced 'continuous media'

Georeferenced 'continuous media' is another way of saying that the audio or video holding your attention has been referenced to include some spatial information about the place it describes or the place where it was captured.

There is heaps of stuff emerging now that can accomplish this. The problem is that no one yet knows which 'format' will eventually percolate to the top to be anointed 'best practice'. Notwithstanding, here are some applications and techniques that are currently jumping up and down saying "pick me!"

Annodex – industrial strength.
From Wikipedia, the free encyclopedia
Annodex is a digital media format developed by CSIRO to provide annotation and indexing of continuous media, such as audio and video. It is based on the Ogg container format, with an XML language called CMML (Continuous Media Markup Language) providing additional metadata. It is intended to create a Continuous Media Web (CMWeb), whereby continuous media can be manipulated in a similar manner to text media on the World Wide Web, including searching and dynamic hyperlinking.



Overview

While Web search engines are solving the problem of wading through large sets of textual documents to find a required piece of information, there is currently no standardised way on the Web to find clips within time-continuous documents such as audio and video. There is not even a way to address temporal offsets into such files, to surf away from clips, or to link into clips using URIs.

The CMWeb project is enabling the searching and surfing of clips of audio and video, providing solutions to both the consumer market and the professional market. Just like the World Wide Web, this technology gains its full economic potential only when available to everybody on the Internet. And also just like the World Wide Web, it is opening up new areas of research and new applications for our existing research into information extraction and delivery.
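
To make that concrete, here is a minimal CMML sketch of the kind of markup involved. The clip id, title and the geo meta names below are my own illustrative assumptions; CMML itself defines the clip/desc/meta structure but does not mandate any geotagging vocabulary:

<?xml version="1.0" encoding="UTF-8"?>
<cmml>
  <head>
    <title>West MacDonnell Ranges drive</title>
  </head>
  <!-- Each clip marks a time-aligned, individually addressable segment -->
  <clip id="ormiston-gorge" start="npt:754.0">
    <desc>Walking into Ormiston Gorge, late afternoon.</desc>
    <!-- Hypothetical geo metadata, borrowing the common geo.lat/geo.long convention -->
    <meta name="geo.lat" content="-23.631"/>
    <meta name="geo.long" content="132.729"/>
  </clip>
</cmml>

A clip marked up like this could then be reached directly with a URI along the lines of video.anx#t=npt:754.0 under the Annodex temporal URI proposal, which is what makes clips both searchable and surfable.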

Of note here is that Annodex is Drupal-friendly; once an institution or organisation commences construction of georeferenced media, a powerful multimedia database will be needed to build and stream customised content on the fly. In our Memestreme project situation, for example, we can imagine that ultimately some org or other will be building instantly customised Red Centre Way augmented-reality tours for any given visiting Pancultural-e demographic.
"Hello, I'm Naurelle, a 48-year-old married professional botanist from South Africa, family in tow, interested in flora, fauna and geology, and I want to buy one of your iPhone augmented-reality tours [memestreme] for stage one of the Red Centre Way"...
"Greetings my good man, I'm Shameless, a single-ish 29-year-old lager lout from Manchester who is interested in traditional mythology, meteorology and rally driving; do you have an AR tour to suit?"

Deep geotagging of videos – Motionbox. Idiot-proof, web 2.0 friendly.
This link helps by visualising how separate, editable data channels can run in the background as the movie plays. The question then becomes: can our hardware and software be adapted to parse the geo-locative information contained therein?

Motionbox, a slightly different video-sharing site, and one I really like, has introduced its planned "Deep Tagging" of videos. What this allows could be quite revolutionary. There are two ways that this works at present: a drop-down box under the video screen area that allows a user to jump to segments of the video (useful for chapter-style navigation), and a timeline section showing the deep tags within the video at a glance, with thumbnails. Tags can also overlap the same parts of the video. From a geospatial perspective this could be really powerful: a montage of nice bars in your town, each location both geotagged and described. A GeoRSS feed of videos and parts of videos for an area. Links within the video to a map…
Videos can be located – and not only that, but parts within the video can also be located. Feed-wise, to be able to grab a feed showing what videos, and parts of videos, are around your area (like Flickr photos) would be great. Where are there movies (or deep, hidden sections of a movie) tagged "London"? What parts of the world are people interested in?
I've made a little example from my recent trip to the Grindelwald area of the Swiss Alps.
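
To connect this back to the feeds idea: a deep-tagged segment could travel in an ordinary RSS item with a GeoRSS point attached. The Motionbox URL, the #t= fragment and the coordinates below are illustrative guesses rather than Motionbox's actual output, and the feed root would also need to declare xmlns:georss="http://www.georss.org/georss":

<item>
  <title>Grindelwald: deep tag "view of the Eiger"</title>
  <!-- Hypothetical link straight into the tagged segment (seconds 130 to 225) -->
  <link>http://www.motionbox.com/videos/EXAMPLE#t=130/225</link>
  <description>Deep-tagged segment panning across the Eiger north face.</description>
  <!-- GeoRSS-Simple: latitude then longitude -->
  <georss:point>46.624 8.041</georss:point>
</item>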



From the ICT Centre at CSIRO

Continuous Media Web - Comparison to existing technologies


1. How does the CMWeb technology differ from MPEG-21?

MPEG-21 is building an open framework for multimedia delivery and consumption. It thus focuses on how to generically describe a set of content documents (called a "digital item") that belong together from a semantic point of view, including all the information necessary to provide services on these digital items. As an example, consider a music CD album. When it is turned into a "digital item", the album is described in an XML document that contains references to the cover image, the text on the CD cover, the text in an accompanying brochure, references to a set of audio files that contain the songs on the CD, ratings of the album, rights associated with the album, information on the different encoding formats in which the music can be retrieved, the different bitrates that can be supported when downloading, etc. This description supports everything that you would want to do with a digital CD album: it allows you to manage it as an entity, describe it with metadata, exchange it with others, and collect it as an entity.

In comparison, the CMWeb focuses on a much smaller task. It looks only at time-continuous data files, it allows the creation of meta-information for clips of such a file, and it allows this meta-information to be incorporated in a time-synchronous manner into the bitstream. Its only aim is to integrate time-continuous data files into the existing World Wide Web by making clips addressable through URIs and searchable through textual search engines. So, the music CD example would be represented in the CMWeb as one large audio file on a Web server that consists of a concatenation of the songs of that album and has some XML markup interspersed at the relevant points where each new song starts. There will be textual meta-information in the bitstream that describes the different songs, allowing them to be searched through a Web search engine. This is considered one Web resource. There may be hyperlinks in that file to other Web resources that represent the cover image and the accompanying brochure, but they are not part of the Web resource. Therefore, it is not possible to describe the kind of entity that is represented in an MPEG-21 digital item through the CMWeb. But by focusing squarely on time-continuous data files only, by providing a markup language similar to HTML, by enabling the time-synchronous storage of that markup in the media bitstream, and by extending the URI linking scheme to address clips of time-continuous data files, we can leverage existing Web infrastructure. We expect that Annodex media will become one of the formats that MPEG-21 digital items can hold.
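
Translated into markup, the album-as-one-Web-resource idea might look roughly like this. It is a sketch under my own assumptions about names and times; only the cmml/clip/desc/a structure comes from CMML itself:

<cmml>
  <head>
    <title>Example Album</title>
  </head>
  <!-- One clip per song, interspersed where each song starts in the audio bitstream -->
  <clip id="song1" start="npt:0">
    <desc>Song one: searchable textual description of the track.</desc>
  </clip>
  <clip id="song2" start="npt:195.0">
    <!-- The brochure is merely hyperlinked; it is a separate Web resource -->
    <a href="http://example.com/album-brochure.html">Accompanying brochure</a>
    <desc>Song two.</desc>
  </clip>
</cmml>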

2. How does the CMWeb technology differ from MPEG-7?

MPEG-7 is an open framework for describing multimedia content. It provides a large set of description schemes for creating markup in XML format. MPEG-7's markup is not restricted to textual information only; in fact, it is tailored to allow for the description of audio-visual content with low-level image and audio features extracted through signal-processing methods. It also bears basically no resemblance to HTML, as it was not built with a particular focus on Web applications.

Instead, the CMWeb technology provides an HTML-like textual markup of time-continuous data files in its markup language, CMML. It provides for the inclusion of this markup in the time-continuous data stream through its Annodex file format, which is not provided for in MPEG-7. It provides URI addressing of clips of time-continuous data files through an extension of the URI fragment addressing scheme. However, annotations created in MPEG-7 may be referenced from inside an Annodex format bitstream, and some may even be included directly in the CMML of an Annodex format bitstream through the "meta" and "desc" tags.
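
As a final sketch, this is how I read that last point: a clip's "desc" carries the searchable text, while a "meta" name/content pair points out to a fuller MPEG-7 description. The meta name "mpeg7.reference" and the URL are my inventions, not a defined vocabulary:

<clip id="scene1" start="npt:12.5">
  <!-- Plain text, indexable by ordinary Web search engines -->
  <desc>Aerial view of Uluru at sunset.</desc>
  <!-- Hypothetical pointer to a richer MPEG-7 description held elsewhere -->
  <meta name="mpeg7.reference" content="http://example.com/mpeg7/scene1.xml"/>
</clip>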