
Meta Segment Anything Model 2
Object segmentation in videos and images
User Rating: 3.0
Score: 60
Free/Trial: Supported
Features: 3
Last Updated: Feb 05, 2026
What is Meta Segment Anything Model 2?
Meta Segment Anything Model 2 (SAM 2) is the first unified model for segmenting objects across images and videos. It allows users to select objects in any image or video frame using a click, box, or mask as input. SAM 2 is designed for fast, precise object selection and offers state-of-the-art performance for object segmentation in both images and videos. The models are open source under an Apache 2.0 license.
How to use Meta Segment Anything Model 2?
Users can select objects in images or video frames by providing a click, box, or mask as input. The model then segments the object based on the provided prompt. Additional prompts can be used to refine the model predictions, especially in video frames.
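Because the models are open source, the click-prompt workflow described above can be sketched with the `sam2` Python package. This is a minimal sketch, not an official example from this listing: the checkpoint id (`facebook/sam2-hiera-large`), the helper function name, and the sample coordinates are illustrative assumptions based on the public repository.

```python
def segment_with_click(image, x, y):
    """Sketch: segment the object under one positive click prompt.

    Assumes the open-source `sam2` package and a Hugging Face
    checkpoint id; both are illustrative, not from this listing.
    """
    import numpy as np
    import torch
    from sam2.sam2_image_predictor import SAM2ImagePredictor

    predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")
    with torch.inference_mode():
        predictor.set_image(image)  # image: HxWx3 uint8 RGB array
        masks, scores, _ = predictor.predict(
            point_coords=np.array([[x, y]]),  # one (x, y) pixel click
            point_labels=np.array([1]),       # 1 = foreground, 0 = background
        )
    # The predictor returns several candidate masks; keep the one
    # with the highest predicted quality score.
    return masks[int(scores.argmax())]


# A prompt is just a coordinate plus a label; a box prompt would be
# passed as box=[x0, y0, x1, y1] instead of point_coords/point_labels.
click_xy = (250, 375)  # hypothetical pixel location of the target object
fg_label = 1           # include (1) rather than exclude (0)
```

A box or mask prompt slots into the same `predict` call, and extra clicks with label 0 carve excluded regions out of the current prediction.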
Top Features
- Unified image and video segmentation
- Interactive object selection using clicks, boxes, or masks
- Real-time interactivity and results
- Robust zero-shot performance on unfamiliar videos and images
- State-of-the-art performance for object segmentation
Use Cases
- Selecting and tracking objects across video frames
- Refining object segmentation with additional prompts
- Enabling precise editing capabilities in video generation models
- Creating interactive applications with real-time video processing
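The first two use cases above (selecting, tracking, and refining an object across video frames) can be sketched against the open-source video predictor. This is a hedged sketch: the checkpoint id, the `video_dir` layout, and the method names are assumptions drawn from the public repository, not details from this listing.

```python
def track_object(video_dir, frame_idx, x, y, obj_id=1):
    """Sketch: prompt one frame with a click, then propagate the
    object's mask through the rest of the video.

    `video_dir`, the checkpoint id, and the coordinates are
    illustrative assumptions, not details from this listing.
    """
    import numpy as np
    import torch
    from sam2.sam2_video_predictor import SAM2VideoPredictor

    predictor = SAM2VideoPredictor.from_pretrained("facebook/sam2-hiera-large")
    with torch.inference_mode():
        state = predictor.init_state(video_path=video_dir)
        # One positive click on the prompted frame selects the object.
        predictor.add_new_points_or_box(
            state,
            frame_idx=frame_idx,
            obj_id=obj_id,
            points=np.array([[x, y]], dtype=np.float32),
            labels=np.array([1], dtype=np.int32),
        )
        # Propagation yields a mask for the object on every frame;
        # additional clicks on any frame would refine the prediction.
        results = {}
        for idx, obj_ids, mask_logits in predictor.propagate_in_video(state):
            results[idx] = mask_logits[0] > 0.0  # threshold logits to a mask
    return results


TRACKED_OBJ_ID = 1  # object ids let one state track multiple objects at once
```

Per-object ids are what make the multi-object and refinement use cases possible: the same inference state holds one mask stream per id.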
Meta Segment Anything Model 2 Pricing
Free Plan / Subscription Plan
No detailed pricing information available
Meta Segment Anything Model 2 Features
- AI Image Segmentation
- AI Models
- Open Source AI Models