## Latest updates -- SAM 2: Segment Anything in Images and Videos
Please check out our new release, **Segment Anything Model 2 (SAM 2)** ([demo](https://sam2.metademolab.com/), [repo](https://github.com/facebookresearch/segment-anything-2), [paper](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/)), which enables segmenting anything in **images and videos**!
* SAM 2 paper: https://arxiv.org/abs/2408.00714
* SAM 2 code: https://github.com/facebookresearch/segment-anything-2
**Segment Anything Model 2 (SAM 2)** is a foundation model for promptable visual segmentation in images and videos. We extend SAM to video by treating an image as a video with a single frame. The model design is a simple transformer architecture with streaming memory for real-time video processing. We built a model-in-the-loop data engine, which improves both the model and the data via user interaction, to collect [**our SA-V dataset**](https://ai.meta.com/datasets/segment-anything-video), the largest video segmentation dataset to date. SAM 2 trained on our data provides strong performance across a wide range of tasks and visual domains.