Today, we’re excited to announce several updates to our industry-leading Accelerated Annotation solution. These updates add early access releases of keypoint annotation and video understanding to the roster of annotation types the product serves. These improvements expand our ability to support ML teams through tech-enabled, human-in-the-loop labeling solutions.
Keypoint Annotation
Keypoint annotation is used to plot characteristic points in data, such as the eyes and nose in an image used for facial recognition. One of the benefits of keypoint annotation is that you get a more nuanced understanding of the spatial relationships between structures in an image, which can help solve more complex computer vision tasks.
Another benefit of our keypoint annotation tool is that it naturally supports occluded or hidden elements in a structure. For example, with a face in profile, you can either label an occluded facial element if you roughly know where it is, or skip it entirely if you’re not sure.
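To make occlusion handling concrete, here is a minimal sketch of one common way keypoint labels are represented. It follows the COCO-style convention of (x, y, visibility) triples; this is an illustrative standard with a hypothetical record, not our platform’s export schema.

```python
# Illustrative COCO-style keypoint record (a common convention, not
# CloudFactory's export schema). Each keypoint is an (x, y, visibility)
# triple, where visibility is:
#   0 = not labeled (skipped entirely)
#   1 = labeled but occluded (position roughly known)
#   2 = labeled and fully visible
face_annotation = {
    "image_id": 1042,       # hypothetical image identifier
    "keypoints": [
        310, 225, 2,        # right eye: visible
        0,   0,   0,        # left eye: unknown position, skipped
        298, 261, 1,        # nose: occluded but roughly placed
    ],
    "num_keypoints": 2,     # count of points with visibility > 0
}
```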
Some common use cases of keypoint annotation are:
- Movement tracking and prediction
- Facial expression and gesture recognition
- Human and animal pose estimation
- Robotic instruments and action tracking
- Sports analytics
We are excited to continue accelerating these more nuanced annotation types through our industry-leading technology and best-in-class workforce. Keypoint annotation is especially ripe for automation, as it can be fairly time-intensive to walk through each element that needs to be annotated (for example: right eye, right shoulder, right elbow, right wrist, and so on), typically following a fixed, ordered template like the one sketched below.
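As one standard example of such a template, the widely used 17-point COCO human pose skeleton defines the ordered element list below; any given project would define its own template for its subject matter.

```python
# The standard 17-point COCO human pose template: one example of the
# ordered element list an annotator (or automation) walks through.
COCO_KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",
]
```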
Video Understanding
Video understanding, also called video activity recognition, supports use cases focused on identifying or classifying activities, actions, or events within streaming video. This new capability allows data analysts to stream videos directly in our platform, identify onscreen elements using hotkeys to maximize efficiency, perform manual review and editing, and export videos into frames for use cases that require it (a generic sketch of frame extraction follows the list below).
This feature supports use cases such as:
- Scene understanding and classification
- Action segmentation
- Human event recognition
- Action detection and surveillance
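For teams that consume a frame export downstream, the sketch below shows the general shape of splitting a video into per-frame images with OpenCV. It is a generic illustration with hypothetical file paths, not our platform’s export pipeline.

```python
import os
import cv2  # pip install opencv-python

# Minimal sketch: split a video into individual frames for frame-level
# workflows. Paths are hypothetical; this is not our export pipeline.
os.makedirs("frames", exist_ok=True)
capture = cv2.VideoCapture("clip.mp4")
frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:  # no more frames, or the file could not be read
        break
    cv2.imwrite(f"frames/frame_{frame_index:06d}.png", frame)
    frame_index += 1
capture.release()
```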
We’re excited to be able to support this complex annotation type through our Accelerated Annotation platform and workforce. This capability takes the burden off internal teams, for whom a single hour of video can take up to 800 hours to annotate.
SAM Improvements
In addition to these new annotation types, we’ve continued to improve the integration of the Segment Anything Model (SAM) with our Accelerated Annotation solution. We’ve added additional prompt modes, viewport capabilities, and access to three mask levels to improve the efficiency, speed, and quality of manual annotation when it’s required.
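As background on the three mask levels: Meta’s open-source segment-anything library returns three candidate masks at increasing granularity (roughly subpart, part, and whole object) when multimask_output is enabled. The sketch below uses that public API with hypothetical paths and click coordinates; it illustrates the underlying model’s behavior, not our platform’s internals.

```python
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor  # pip install segment-anything

# Minimal sketch of SAM's public API (hypothetical paths/coordinates).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB image; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# A single foreground click as a point prompt (label 1 = foreground).
point_coords = np.array([[450, 320]])
point_labels = np.array([1])

# multimask_output=True returns three masks at different granularity
# levels, each with a predicted quality score.
masks, scores, logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
print(masks.shape)  # (3, H, W): one boolean mask per granularity level
```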
Get in touch for a demo of our new features, and stay tuned for more news from CloudFactory!