
add a more detailed description of SAM 2 in README.md

Ronghang Hu committed 10 months ago (branch: main, parent commit: 2418a1deee)

1 changed file: README.md (12 changed lines)
```diff
@@ -1,6 +1,14 @@
-## Latest updates
+## Latest updates -- SAM 2: Segment Anything in Images and Videos
 
-Please check out our new release on SAM 2 ([demo](https://sam2.metademolab.com/), [repo](https://github.com/facebookresearch/segment-anything-2), [paper](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/)), which enables segmenting anything in **images and videos**!
+Please check out our new release on **Segment Anything Model 2 (SAM 2)**.
+
+* SAM 2 paper: https://arxiv.org/abs/2408.00714
+* SAM 2 code: https://github.com/facebookresearch/segment-anything-2
+* SAM 2 demo: https://sam2.metademolab.com/
+
+![SAM 2 architecture](https://github.com/facebookresearch/segment-anything-2/blob/main/assets/model_diagram.png?raw=true)
+
+**Segment Anything Model 2 (SAM 2)** is a foundation model towards solving promptable visual segmentation in images and videos. We extend SAM to video by considering images as a video with a single frame. The model design is a simple transformer architecture with streaming memory for real-time video processing. We build a model-in-the-loop data engine, which improves model and data via user interaction, to collect [**our SA-V dataset**](https://ai.meta.com/datasets/segment-anything-video), the largest video segmentation dataset to date. SAM 2 trained on our data provides strong performance across a wide range of tasks and visual domains.
 
 # Segment Anything
```
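The description added by this commit summarizes SAM 2's promptable interface. As a concrete illustration, here is a minimal usage sketch based on the image-predictor snippet in the linked SAM 2 repository; the checkpoint/config names, the example image path, and the click coordinates are illustrative and may differ across releases:

```python
import numpy as np
import torch
from PIL import Image

from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Checkpoint/config names follow the SAM 2 repo's README; adjust the
# paths to wherever you downloaded the weights.
checkpoint = "./checkpoints/sam2_hiera_large.pt"
model_cfg = "sam2_hiera_l.yaml"
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint))

# "truck.jpg" is a stand-in for any RGB image.
image = np.array(Image.open("truck.jpg").convert("RGB"))

with torch.inference_mode(), torch.autocast("cuda", dtype=torch.bfloat16):
    predictor.set_image(image)
    # One foreground click as the prompt; SAM 2 keeps SAM's promptable
    # interface and returns candidate masks with confidence scores.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),  # 1 = foreground, 0 = background
    )
```

For video, the repository exposes an analogous `build_sam2_video_predictor`, which takes the same kinds of point prompts on one frame and propagates masks across the remaining frames via the streaming memory described above.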
