The widespread availability of mobile phones with high-quality cameras means that dramatic events around the world can be captured on video and rapidly shared via social media. The goal of this project is to create tools and methods based on computer vision and machine learning to rapidly process and analyze this video. We hope these tools make human rights advocates’ workloads more manageable and less stressful. They are meant to augment the experience and expertise of the human rights researcher, not to replace human judgment or decision-making.
This tool plays multiple videos simultaneously, aligned using audio synchronization output or timestamp metadata when available. It creates a universal timeline and allows the analyst to adjust each video’s time offset when necessary. It also plots the location of each video on a map when possible.
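One common approach to audio-based alignment, and a plausible guess at how such a tool estimates offsets, is cross-correlation of the clips’ audio tracks. The sketch below assumes the audio has already been extracted as mono NumPy arrays at a shared sample rate; `estimate_offset` is a hypothetical helper for illustration, not the project’s actual code.

```python
import numpy as np
from scipy.signal import fftconvolve

def estimate_offset(ref_audio: np.ndarray, other_audio: np.ndarray,
                    sample_rate: int) -> float:
    """Return the difference between the recordings' start times
    (other minus ref, in seconds), found at the peak of their
    cross-correlation, so both can be placed on a shared timeline."""
    # Reversing one signal turns FFT convolution into cross-correlation.
    corr = fftconvolve(ref_audio, other_audio[::-1], mode="full")
    # Re-center the peak index so that 0 means "already aligned".
    lag = np.argmax(corr) - (len(other_audio) - 1)
    return lag / sample_rate

# Synthetic check: `other` begins recording 0.5 s after `ref`.
sr = 8000
clip = np.random.default_rng(0).normal(size=sr * 2)
ref, other = clip, clip[int(0.5 * sr):]
print(estimate_offset(ref, other, sr))  # ~0.5
```

The returned offset can be added to the reference video’s start time to position the other video on the universal timeline, and then adjusted manually when the automatic estimate is off.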
This tool, based on the work of Enrique Piraces, enables a viewer to analyze a single video in great detail. It provides zoom functionality, frame-by-frame forward and reverse stepping, and full-frame rotation while the video is playing.
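As a rough illustration of frame-level navigation and rotation (zoom is omitted for brevity), the hypothetical OpenCV sketch below steps through a video one frame at a time under keyboard control. It is not the project’s implementation, and per-frame seeking via `CAP_PROP_POS_FRAMES` can be slow or imprecise for some codecs.

```python
import cv2

def inspect_video(path: str) -> None:
    """Minimal frame-by-frame viewer: step forward/back, rotate, quit."""
    cap = cv2.VideoCapture(path)
    frame_idx, angle = 0, 0
    while True:
        # Seek to the requested frame; approximate for some codecs.
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = cap.read()
        if not ok:
            break
        # Apply the current rotation in 90-degree steps.
        for _ in range(angle // 90):
            frame = cv2.rotate(frame, cv2.ROTATE_90_CLOCKWISE)
        cv2.imshow("viewer", frame)
        key = cv2.waitKey(0) & 0xFF
        if key == ord("q"):        # quit
            break
        elif key == ord("r"):      # rotate 90 degrees clockwise
            angle = (angle + 90) % 360
        elif key == ord("a"):      # step one frame back
            frame_idx = max(frame_idx - 1, 0)
        else:                      # any other key: step forward
            frame_idx += 1
    cap.release()
    cv2.destroyAllWindows()
```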
This tool creates a unique sound print for a video clip using an algorithm that recognizes a standardized vocabulary of “features” (such as screaming and explosions). It then compares the sequence of these features in each clip to that of every other clip and flags likely matches, which a human analyst then confirms.
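To make the matching step concrete: once each clip has been reduced to a sequence of feature labels by an upstream classifier, candidate pairs can be found with a generic sequence-similarity measure. The sketch below uses Python’s `difflib` as a stand-in for whatever comparison the tool actually performs; the clip names, labels, and threshold are all illustrative.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Hypothetical per-clip feature sequences from an upstream audio
# classifier (one label per fixed-length window).
clips = {
    "clip_a": ["crowd", "scream", "explosion", "scream", "sirens"],
    "clip_b": ["crowd", "scream", "explosion", "scream", "crowd"],
    "clip_c": ["music", "speech", "speech", "applause", "music"],
}

def candidate_matches(clips: dict[str, list[str]], threshold: float = 0.7):
    """Yield clip pairs whose feature sequences are similar enough
    to be worth a human analyst's confirmation."""
    for (name_a, seq_a), (name_b, seq_b) in combinations(clips.items(), 2):
        score = SequenceMatcher(None, seq_a, seq_b).ratio()
        if score >= threshold:
            yield name_a, name_b, score

for a, b, score in candidate_matches(clips):
    print(f"{a} ~ {b}: similarity {score:.2f}")  # clip_a ~ clip_b: 0.80
```

Only pairs above the threshold are surfaced, which keeps the human confirmation step focused on a short list rather than every possible pairing.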
This tool detects gunshots in the audio track of a video and tells the analyst the time points at which gunshots likely occurred. It also estimates the number of gunshot events in the video.
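A very simple stand-in for such a detector, not the tool’s actual algorithm, is to flag short audio windows whose energy spikes far above the clip’s typical level and merge nearby spikes into events; the window length, threshold factor, and merge gap below are illustrative parameters.

```python
import numpy as np

def detect_gunshots(audio: np.ndarray, sample_rate: int,
                    win: float = 0.02, factor: float = 8.0,
                    min_gap: float = 0.1) -> list[float]:
    """Return time points (seconds) of sudden loud transients that may
    be gunshots: windows whose RMS energy far exceeds the clip's median
    energy, merged when closer together than `min_gap` seconds."""
    hop = int(win * sample_rate)
    n = len(audio) // hop
    # Short-term RMS energy per non-overlapping window.
    frames = audio[: n * hop].reshape(n, hop)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    threshold = factor * np.median(rms)
    loud = np.flatnonzero(rms > threshold)
    # Merge adjacent loud windows into single events.
    events = []
    for i in loud:
        t = i * win
        if not events or t - events[-1] > min_gap:
            events.append(t)
    return events

# Synthetic check: quiet noise with impulses at 1.0 s and 2.5 s.
sr = 16000
audio = 0.01 * np.random.default_rng(1).normal(size=sr * 4)
for t in (1.0, 2.5):
    audio[int(t * sr):int(t * sr) + 400] += 0.8
times = detect_gunshots(audio, sr)
print(f"{len(times)} events at {[round(t, 2) for t in times]}")  # 2 events
```

The length of the returned list gives the estimated number of gunshot events, and each entry is a time point the analyst can jump to for review.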