Natural Language Processing is about taking language as humans use it, written or spoken, with all the ambiguity and richness it employs to convey logic, rhetoric, and sentiment, and bringing it into a machine representation.
Let's look at video captioning through NLP. Generating captions for images has been done with good results, but work on captioning videos, following the action to form a kind of storyboard, is still ongoing.
There are several motivations for this:
- Predicting actions.
- Logging security-camera footage.
- Generating a summary of a video.
- Producing audio descriptions for blind people, if applied in real time using their viewpoint as input.
- Enabling text search within a video that jumps automatically to the point in the timeline matching the query.
The general workflow looks like this:
The video is segmented into frames based on action localization, scene transitions, or fixed image slices. These frames are preprocessed for noise removal and feature extraction, then fed into a CNN, an RNN, or a combination of the two to generate descriptive captions. The captions are contextually independent of one another, so NLP techniques such as coreference resolution and connective-word generation are used to link nouns, verbs, and other entities across captions into a coherent narrative.
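The segmentation step above can be sketched in a few lines. This is a minimal, illustrative version of scene-transition detection: it assumes frames arrive as flat grayscale pixel lists and marks a new scene whenever the average per-pixel change between consecutive frames exceeds a threshold. The function names and the threshold value are hypothetical, not from any particular library.

```python
# Illustrative sketch: scene-transition-based frame segmentation.
# Assumes each frame is a flat list of grayscale pixel values (0-255).

def mean_abs_diff(frame_a, frame_b):
    """Average absolute per-pixel difference between two frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def detect_scene_transitions(frames, threshold=50.0):
    """Return the indices of frames assumed to start a new scene."""
    cuts = [0]  # the first frame always begins a scene
    for i in range(1, len(frames)):
        if mean_abs_diff(frames[i - 1], frames[i]) > threshold:
            cuts.append(i)
    return cuts

# Tiny synthetic "video": two scenes of near-constant brightness.
video = [[10] * 16] * 3 + [[200] * 16] * 3
print(detect_scene_transitions(video))  # scene boundaries at frames 0 and 3
```

A real pipeline would of course operate on decoded video frames and use learned features rather than raw pixel differences, but the thresholding idea is the same.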
The same technique is also useful for automatically aligning a caption transcript to a video: captions are generated as above, and each one is then matched against the corresponding sentence in the transcript.
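As a rough sketch of that alignment step, one could match each machine-generated, timestamped caption to its most similar transcript sentence by string similarity. The data and function below are purely illustrative (a real system would compare learned embeddings rather than surface strings), but Python's standard-library `difflib` is enough to show the idea.

```python
# Illustrative sketch: align transcript sentences to video timestamps
# via the captions generated for each scene.
from difflib import SequenceMatcher

def align(transcript, generated):
    """Map each transcript line to the timestamp of its best-matching caption."""
    alignment = {}
    for timestamp, caption in generated:
        best = max(transcript,
                   key=lambda line: SequenceMatcher(None, line.lower(),
                                                    caption.lower()).ratio())
        alignment[best] = timestamp
    return alignment

transcript = ["A man opens the door", "He walks into the kitchen"]
generated = [(4.2, "a man opening a door"), (9.7, "man walks in a kitchen")]
print(align(transcript, generated))
```

With this mapping, a text query against the transcript can be translated directly into a seek position in the video.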
This area is highly applicable in the video-rich social networks we live with today: Twitter, Facebook, YouTube, and the like.