MuseMe Launches First AI-Driven Framework for Interactivity in Video Streams

Livestreaming Pioneers Philipp Angele and Marc Cymontkowski Introduce Interactive Video-on-Demand Product Ideal for E-Commerce and Social Media Creators; Live Product Now in Beta

BERLIN, GERMANY – MuseMe, an interactive video platform, today announced the launch of its first product, which enables video library owners to create fully interactive objects throughout video streams. MuseMe has developed the first AI-driven framework for enabling interactivity in video, using AI object detection to identify, mask, and label objects within videos, making them clickable for the viewer. With MuseMe, objects in video are inherently interactive, regardless of where or when they appear in a video. MuseMe is targeted toward e-commerce merchants, social media content creators, and enterprise professionals who seek intelligent interactive solutions for their existing video workflows.

MuseMe was founded by Philipp Angele, a livestreaming pioneer who led Wowza into the cloud domain and incubated the company based on his research at LivePeer. He is joined by CTO Marc Cymontkowski, the founder of SRT (Secure Reliable Transport), one of the most impactful building blocks of modern streaming. Together, their work delivers a foundational interactive layer for a wide range of content producers and distributors, from broadcasters to web3 developers.

Marc Cymontkowski will discuss and demonstrate MuseMe at the National Association of Broadcasters (NAB) show in Las Vegas, from Saturday, April 13 through Wednesday, April 17.

The first product from MuseMe is a free VOD platform that makes any YouTube video interactive. MuseMe will shortly launch a beta of its livestreaming product, which will enable automated interactivity for live content creators. For a demo of the MuseMe interactive video product, sign up at

“The value of true interactivity in video has been a holy grail for more than 25 years, and now MuseMe is the first provider of these services,” said Angele. “We could not have done this work without the benefit of open source AI, and a keen understanding of how open world object detection impacts the processing of video streams. We have wanted to deliver this kind of solution for years; this is a watershed moment in the video industry.”

MuseMe is currently working on an Apple Vision Pro demo that will let users livestream to their viewers while everything they see is interactive. Viewers can instantly tell creators what they prefer, and creators can see their audience's voting results. This kind of "always aware" livestream intelligence is ideal for the next generation of augmented reality (AR) environments.

“We enable interactivity in video that we never had before, by applying new models from the AI world. The intelligence that we have can create real-time identification of all kinds of objects; we can get the shapes of the objects and the metadata, and make that available,” said Cymontkowski. “The real game-changer is the ability to run this interactive experience on all devices, all players, and within the established streaming specs to produce what is very much like a videogame; the user can control actions and can do this without taxing the server resources of MuseMe, since we only carry the metadata – not the actual video. We are finally able to automate video object detection, which is a breakthrough moment for interactivity in clickable web content.”

Promises of interactive video solutions permeated the market in the 1990s and 2000s, particularly in the entertainment industry, where “buy Jennifer Aniston’s sweater” from a scene of the popular TV show ‘Friends’ was a common demo. Now, interactive engagement in video is poised to become a standard feature.
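The mechanism Cymontkowski describes – object detection yields labeled shapes that are delivered as a lightweight metadata track alongside the stream, with click handling done on the player side – can be sketched roughly as follows. This is an illustrative sketch only: the detection format, function names, and schema here are hypothetical, not MuseMe's published API.

```python
import json

# Hypothetical per-frame detections: (timestamp_s, label, polygon of (x, y)
# points normalized to 0..1 so they scale to any player size).
detections = [
    (12.0, "sweater", [(0.40, 0.30), (0.55, 0.30), (0.55, 0.60), (0.40, 0.60)]),
    (12.0, "lamp",    [(0.80, 0.10), (0.90, 0.10), (0.90, 0.35), (0.80, 0.35)]),
]

def build_metadata_track(detections):
    """Group detections into a timed metadata track, keyed by timestamp,
    that could be delivered alongside the video rather than inside it."""
    track = {}
    for ts, label, polygon in detections:
        track.setdefault(ts, []).append({"label": label, "polygon": polygon})
    return track

def hit_test(track, ts, x, y):
    """Return the label of the object whose region contains a click at
    normalized (x, y) for timestamp ts, using a standard ray-casting
    point-in-polygon test; None if the click hits no object."""
    for obj in track.get(ts, []):
        poly = obj["polygon"]
        inside = False
        j = len(poly) - 1
        for i in range(len(poly)):
            xi, yi = poly[i]
            xj, yj = poly[j]
            if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
                inside = not inside
            j = i
        if inside:
            return obj["label"]
    return None

track = build_metadata_track(detections)
print(hit_test(track, 12.0, 0.45, 0.45))  # → sweater
print(json.dumps(track)[:60])  # the payload is text metadata, not video
```

Because only this small metadata payload travels with the stream, the click logic runs entirely in the player, which is consistent with the quote's point about not taxing server resources.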