Latest News

BRIDGET will hold three workshops during the project. The first, held in December 2013, was organised with external contributors and companies/organisations with the goal of collecting and documenting additional use scenarios, and of presenting and discussing technologies already available at the partners for further development in the project. The output of this workshop was an extensive list of candidate use scenarios for possible demonstration of BRIDGET tools, and a preliminary mapping of technologies (both current and future) to the scenarios.

The second workshop will take place halfway through the project, allowing for external feedback on the tools BRIDGET is producing. This feedback will help guide the work of the second half of the project, which will be showcased and discussed in the third and final workshop towards the end of the project.

If you are in the broadcast industry and are interested in taking part in these workshops, please contact us.

If you are already a participant, you can access the workshop pages for details of past and future workshops.

Project Co-ordinator

Miroslaw Bober, Centre for Vision, Speech and Signal Processing, University of Surrey, UK.

Technical Co-ordinator

Marius Preda, Institut Mines-Télécom, France

Contact

info@ict-bridget.eu

BRIDGET will open new dimensions for multimedia content creation and consumption by enhancing broadcast programmes with bridgets: links from the programme you are watching to external interactive media elements such as web pages, images, audio clips, different types of video (2D, multi-view, with depth information, free viewpoint) and synthetic 3D models.
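To make the concept above concrete, here is a minimal sketch of the kind of metadata record a bridget might carry. None of these field names come from the BRIDGET specification; they are hypothetical and only illustrate a timed link from a broadcast programme to an external media element.

```python
from dataclasses import dataclass

@dataclass
class Bridget:
    """Hypothetical bridget record: a timed link from a broadcast
    programme to an external media element (field names are illustrative,
    not taken from the BRIDGET specification)."""
    programme_id: str   # broadcast programme the link is anchored to
    start_time: float   # seconds into the programme when the link activates
    end_time: float     # seconds into the programme when it deactivates
    media_type: str     # e.g. "webpage", "image", "audio", "video", "3d-model"
    target_url: str     # the external media element being linked
    label: str = ""     # caption shown on the second-screen device

    def active_at(self, t: float) -> bool:
        """Return True if this bridget should be offered at playback time t."""
        return self.start_time <= t <= self.end_time

# Example: a link to a 3D model that is active during one scene.
b = Bridget("rai-dolcevita-1960", 1200.0, 1260.0, "3d-model",
            "https://example.org/fontana_di_trevi.glb", "Fontana di Trevi in 3D")
print(b.active_at(1230.0))  # True
```

A player holding a list of such records only needs the current playback time to decide which links to surface on the second screen.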

Bridgets can be:

To deliver the above, BRIDGET will develop:

The AT and player will use a range of sophisticated and innovative technologies that extend the state of the art in media analysis, visual search and 3D scene reconstruction. These will enable customised, context-adapted hybrid broadcast/Internet services offering enhanced interactive, multi-screen, social and immersive content for new forms of AR experiences. BRIDGET tools will be based on, and contribute to, international standards, thus ensuring the creation of a true horizontal market and ecosystem for connected TV and contributed-media applications.

BRIDGET is a 36-month project running from 1 November 2013 until 31 October 2016.

EC-funded STREP, Grant Agreement 610691

Example Use Scenarios

BRIDGET is strongly driven by new content creation and consumption models. Use scenarios are of great importance to BRIDGET, which is why the overall strategy of the work plan is orchestrated around a feedback loop aimed at their refinement and validation. We present below a reduced list of initial concept use scenarios that we have already identified as candidates for implementation with BRIDGET technologies. We will use the first BRIDGET workshop to refine this list with industry stakeholders and three cases will be selected for implementation in a sequence of increasing functionality and technological complexity.

Use Scenario 1: Enhanced News and Crowd Journalism

Franco is an Italian, born in a small city in Emilia and now working abroad. Watching a news broadcast, he learns of a big earthquake in his region and is anxious to know its effect on his city. Instead of using his tablet to search the Internet for more information (which is cumbersome, as he would need to type keywords and sift through a large number of results to find the ones specifically relevant to his city), Franco uses a tablet powered by a new service called BRIDGET, whose free “app” he has downloaded. The service treats the tablet as an extension of the main TV screen and provides bridgets, i.e., links to relevant content on the Internet (text, images, 3D models, videos), synchronized with the broadcast programme. Franco can easily select any object from the video and obtain, on the tablet, images and videos containing the same object.

Franco immediately uses his tablet to access the bridgets embedded in the broadcast programme. In less than a second he gets eyewitness videos filmed in his town and uploaded to web repositories, as well as other broadcasters' videos covering the disaster. Accessing his own image library, he selects an image of his house and, in a few seconds, all the broadcast content with video shots containing the house is displayed on the tablet. He can consider himself lucky: his house is still standing and has not suffered substantial damage.

But Franco is also receptive to alternative ideas, analyses and opinions: he is a fan of crowd journalism, where ordinary people report or comment on events and music. With BRIDGET, users witnessing an event, such as a music concert, can link live videos, images and interviews that they produce or are aware of to broadcast programmes. Thanks to the BRIDGET approach, for example, broadcast music video clips can be linked to live recordings of concerts by the same artist, where the 3D audio scene is reconstructed to provide an immersive experience of the event. BRIDGET thereby becomes a powerful recommendation system, around which social communities with common interests are formed.

Use Scenario 2: Expanding a Film's Dimensions

Anna, a film fan, is watching “La dolce vita”, directed in 1960 by the great F. Fellini. Many know Fellini as a major film director, but few know of his passion for Rome’s architecture. Although Anna has already watched this film several times, she is doing so again this evening, since it is being broadcast by RAI, which has advertised it as a completely new film experience. During the “Fontana di Trevi” scene, her BRIDGET-enabled tablet “wakes up”: Anna notices that the fountain has become an interactive object, and by clicking on it she is offered other films by Fellini in the RAI archives featuring other fountains in Rome (fountains were, in fact, a leitmotif throughout Fellini’s filmography). What about churches? Why have they not become interactive? Anna draws a red rectangle around a church in Fellini’s film and the tablet initiates a free search. Visual descriptors extracted from the red box are sent to a database, where a match is found identifying “Santa Maria in Via”, also in Rome. BRIDGET yields images, videos and 3D objects representing the church and the fountain from various sources (movies, images from the web and professional archives). BRIDGET also provides a free-viewpoint 3D scene that Anna can interact with to examine the details of the structure. While navigating around the scene, Anna can focus on the audio content by zooming into the video or changing her listening point, to enjoy the real sound of the fountain or hear the bells tolling. Indeed, Fellini used Catholic themes and imagery extensively throughout his film career, and the BRIDGET-enabled tablet shows that many other film directors and amateurs have also recorded views of Santa Maria in Via.
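The descriptor-matching step in this scenario can be sketched as follows. This is an illustration only: BRIDGET's visual search builds on technologies such as MPEG CDVS (Compact Descriptors for Visual Search), whereas the toy 8-dimensional descriptors and brute-force nearest-neighbour search below merely stand in for that pipeline, and the landmark names and values are made up.

```python
import numpy as np

# Illustrative stand-in for a visual-search pipeline: each landmark is
# represented by a single toy descriptor (in a real system, many local
# descriptors per image would be extracted, aggregated and compressed).
rng = np.random.default_rng(0)
database = {
    "Santa Maria in Via": rng.normal(size=8),
    "Fontana di Trevi":   rng.normal(size=8),
    "Colosseo":           rng.normal(size=8),
}

def match(query: np.ndarray) -> str:
    """Return the database entry whose descriptor is nearest (Euclidean)
    to the query descriptor."""
    return min(database, key=lambda name: np.linalg.norm(database[name] - query))

# A query descriptor "extracted" from the region the user marked,
# simulated here as a slightly noisy copy of one database entry.
query = database["Santa Maria in Via"] + rng.normal(scale=0.05, size=8)
print(match(query))  # "Santa Maria in Via"
```

Because the noise added to the query is tiny compared with the distances between unrelated descriptors, the nearest-neighbour search recovers the correct landmark; real systems add quantisation, indexing and geometric verification on top of this basic idea.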

Anna loves to easily and seamlessly access data related to the films she is watching. She generally chooses programmes including bridgets to historical information, alternative programmes and/or 3D reconstructions. When she accesses user-generated content in social networking sites, she particularly likes to be placed in a 3D model of the scene and watch it by “jumping” from spot to spot in the 3D world, each spot corresponding to the viewpoint of the person who recorded and uploaded the media.

Use Scenario 3: Holiday Images and Videos

Emily has just come back from a holiday in Crete. She is very happy to see that the BBC will be broadcasting a travel programme about Crete and is keen to share her photos and videos with her friends and the broader community. She has already uploaded them to a well-known web image repository, but they are not getting enough “likes”, so she tries the new BRIDGET mini AT. She uploads her photos and videos to the BRIDGET website, which the BBC uses to find user community content relevant to its broadcasts, and a large number of them are matched to the BBC programme. The BBC editors preparing the travel programme link them to the content to be broadcast, creating panoramas and even a 3D model of a famous lighthouse. Emily impatiently awaits the release date and finds it an exciting experience to see her own pictures used as part of the official BBC broadcast. All her friends send her messages, which are displayed on her second screen.

Use Scenario 4: Virtual Media Extension of the Broadcast Screen

Celine, Mandy and Tracy are in their living room watching a TV documentary about the medieval city centre of Patan. They are interested in the detailed wooden carvings of the side buildings and would like to know whether other buildings, temples and monuments have the same filigree façades.

They click on their individual BRIDGET-enabled tablets to watch a separate, extended version overlaid with additional data and augmented with synthetic 3D scene elements, in which it is possible to navigate beyond the scene currently shown on TV. Each of them can now pan and zoom to watch a specific part of Patan’s city centre. Celine is especially interested in one of the temples shown on TV, and chooses to see it from different viewpoints. Thanks to the user interface overlaid on top of the current view, she navigates around it and receives additional views taken from other TV documentaries or from people who have shared their photos and videos on the Internet. The BRIDGET player even prompts Celine to watch, aligned with the current TV pictures, one of the videos taken by Mandy and Tracy during an earlier visit to Patan and uploaded to YouTube.

Use Scenario 5: Family Edutainment

The Rossi family, at the request of Giovanni and his dad, has just bought a BRIDGET-enabled panoramic TV set with companion terminals (tablets) for each family member. Giovanni is only eight years old and wants to watch his favourite TV programme, “Balia Bea”, with all the very special interactive features available on the tablet — all the kids in his school love to play with bridgets, which are so interactive and fun!

Yesterday, Giovanni sent a self-drawn picture of Balia Bea to the programme’s website, and would like to see his “masterpiece” on TV — actually, he would like it even more if his parents and friends saw it… Giovanni starts watching the Melevisione programme: objects and characters are activated by touching them on the tablet. Giovanni touches Balia Bea and a menu appears on the screen that lets him choose between different kinds of enrichment: (boring) text with character information, or (nice) pictures of Balia Bea drawn by other kids. A list of small icons appears at the bottom of the screen, where Giovanni finds his picture, and the tablet invites him to vote for the best one, which will be shown to everybody at the end of the programme. Giovanni of course votes for his own picture — but, unfortunately, he can only do so once… Now, the programme shows a scene where the main character is telling a tale about a cat, and in the background there is a clickable “big ball tales dispenser” of tales about animals that Giovanni can read or print.

Bridgets are fun for parents too. The drawing contest is over and Giovanni’s picture was among the top ten. Balia Bea announced the surprise: all ten selected pictures will be in a printed book! Proud of her son, Giovanni’s mother clicks on her tablet and orders the Balia Bea book for Giovanni’s birthday. Just one more click and the link to the book is tweeted to her friends and family. As for his dad, when Balia Bea was talking about the cake to put inside her picnic hamper, he interacted with the ball dispenser in the background, which was a “recipes dispenser” at that point in the programme. Clicking on it he chose a cake recipe to make with Giovanni for dessert at the weekend.


Centre For Vision Speech and Signal Processing
University of Surrey
www.surrey.ac.uk/CVSSP
CEDEO SAS di Chiariglione Leonardo e C.
www.cedeo.net/
Heinrich Hertz Institute,
Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V.
www.fraunhofer.de/
Huawei Technologies Düsseldorf GmbH
www.huawei.com/
Institut Mines-Télécom
www.mines-telecom.fr/
RAI – Radiotelevisione Italiana
www.rai.it/
Telecom Italia S.p.A.
www.telecomitalia.com/
Grupo de Tratamiento de Imágenes
Universidad Politécnica de Madrid
www.gti.ssr.upm.es/
Visual Atoms
visualatoms.com/

Peer Reviewed Publications

2016
[12] Pablo Carballeira, Julián Cabrera, Fernando Jaureguizar, Narciso García, "Analysis of the depth-shift distortion as an estimator for view synthesis distortion", Signal Processing: Image Communication, vol. 41, pp. 128-143, 2016.
2015
[11] E. Vidal, N. Piotto, G. Cordara, F. M. Burgos, "Automatic video to point cloud registration in a structure-from-motion framework", Proc. 2015 IEEE International Conference on Image Processing (ICIP), pp. 2646-2650, 2015.
[10] Daniel Berjón, Guillermo Gallego, Carlos Cuevas, Francisco Morán, Narciso N. García, "Optimal Piecewise Linear Function Approximation for GPU-Based Applications", IEEE Transactions on Cybernetics, vol. PP, no. 99, pp. 1-12, 2015.
[9] I. Feldmann, S. García, A. Messina, S. Paschalakis, M. Bober, "BRIDGET: an approach at sustainable and efficient production of second screen media applications", Proc. IBC (International Broadcasting Convention) 2015, 2015.
[8] Rafael Pagés, Sergio García, Daniel Berjón, Francisco Morán, "SPLASH: A Hybrid 3D Modeling/Rendering Approach Mixing Splats and Meshes", Proceedings of the 20th International Conference on 3D Web Technology, ACM, New York, NY, USA, pp. 231-234, 2015.
[7] Sergio García, Rafael Pagés, Daniel Berjón, Francisco Morán, "Textured Splat-based Point Clouds for Rendering in Handheld Devices", Proceedings of the 20th International Conference on 3D Web Technology, ACM, New York, NY, USA, pp. 227-230, 2015.
[6] Alberto Messina, Francisco Morán Burgos, Marius Preda, Skjalg Lepsoy, Miroslaw Bober, Davide Bertola, Stavros Paschalakis, "Making Second Screen Sustainable in Media Production: The BRIDGET Approach", Proceedings of the ACM International Conference on Interactive Experiences for TV and Online Video, ACM, New York, NY, USA, pp. 155-160, 2015.
[5] Guillermo Gallego, Anthony Yezzi, "A Compact Formula for the Derivative of a 3-D Rotation in Exponential Coordinates", Journal of Mathematical Imaging and Vision, Springer US, vol. 51, no. 3, pp. 378-384, 2015.
[4] R. Pagés, D. Berjón, F. Morán, N. García, "Seamless, Static Multi-Texturing of 3D Meshes", Computer Graphics Forum, vol. 34, no. 1, pp. 228-238, 2015.
2014
[3] L. Bertinetto, M. Balestri, S. Lepsøy, G. Francini, M. Bober, "Telecom Italia at TRECVID 2014 - Instance Search Task", Notebook Papers and Slides, 2014 TREC Video Retrieval Evaluation, Florida, USA, 2014.
[2] N. Piotto, G. Cordara, "Statistical modelling for enhanced outlier detection", Proc. 2014 IEEE International Conference on Image Processing (ICIP), pp. 4280-4284, 2014.
[1] S. Husain, M. Bober, "Robust and scalable aggregation of local features for ultra large-scale retrieval", Proc. 2014 IEEE International Conference on Image Processing (ICIP), pp. 2799-2803, 2014.

Public Deliverables

2015
[11] Alberto Messina, Fulvio Negro, Christian Tulvan, "Validation Framework - Version A", 2015.
[10] Giovanni Cordara, Nicola Piotto, Sergio García Lobo, Francisco Morán Burgos, Davide Bertola, Leonardo Chiariglione, Peter Grosche, Marius Preda, Milos Markovic, Alberto Messina, Adrian Gabrielli, Veronica Scurtu, "BRIDGET Authoring Tools and Player - Report - Version A", 2015.
[9] Ingo Feldmann, Nicola Piotto, Daniel Berjón Díez, Rafael Pagés Scasso, Sergio García Lobo, Francisco Morán Burgos, Giovanni Cordara, Milos Markovic, Sascha Ebel, Wolfgang Waizenegger, "3D Media Tools - Report - Version A", 2015.
[8] Miroslaw Bober, Gianluca Francini, Syed Husain, Skjalg Lepsoy, Simone Madeo, Stavros Paschalakis, "Visual Search Tools - Report - Version A", 2015.
[7] Stavros Paschalakis, Massimo Balestri, Miroslaw Bober, Gianluca Francini, Syed Husain, Skjalg Lepsoy, Alberto Messina, Maurizio Montagnuolo, Karol Wnukowicz, "Media Analysis Tools - Report - Version A", 2015.
[6] A. Messina, F. Negro, E. Gargiulo, L. Longo, E. Guercio, "User Validation - Version A", 2015.
[5] Helen Cooper, Miroslaw Bober, "Second Annual Report", 2015.
2014
[4] Miroslaw Bober, Leonardo Chiariglione, Giovanni Cordara, Gianluca Francini, Diego Gibellino, Francisco Morán Burgos, Karsten Müller, Stavros Paschalakis, Marius Preda, Alberto Messina, Fulvio Negro, Roberto Del Pero, "Dissemination and Standardisation Plan", 2014.
[3] Gianmatteo Perrone, Diego Gibellino, Davide Bertola, Leonardo Chiariglione, Alberto Messina, Giovanni Cordara, Nicola Piotto, Ingo Feldmann, Francisco Morán Burgos, Adrian Gabrielli, Christian Tulvan, Veronica Scurtu, Marius Preda, Stavros Paschalakis, Miroslaw Bober, "System Architecture and Interfaces - Version A", 2014.
[2] Miroslaw Bober, Leonardo Chiariglione, Giovanni Cordara, Diego Gibellino, Alberto Messina, Francisco Morán Burgos, Karsten Müller, Stavros Paschalakis, Marius Preda, "First BRIDGET Workshop and Use Scenarios - Version A", 2014.
[1] Miroslaw Bober, Leonardo Chiariglione, Giovanni Cordara, Diego Gibellino, Alberto Messina, Francisco Morán Burgos, Ingo Feldmann, Stavros Paschalakis, Marius Preda, "BRIDGET Public Annual Report", 2014.

Miscellaneous

F. Morán (at 32:55-37:15) and other researchers from ETSIT-UPM talk on RNE (Radio Nacional de España, Spanish national public radio) (MP3)

G. Francini: "La ricerca visuale e lo standard MPEG CDVS" ("Visual search and the MPEG CDVS standard"), talk at the Video Intelligence Conference, Milan, April 2015.

MLAF proposal slides from MPEG 110 - Presented by Alberto Messina (RAI)
