From 2418a1deeedaed6dc71295d9c820d7badc9b95a6 Mon Sep 17 00:00:00 2001
From: Ronghang Hu
Date: Mon, 16 Sep 2024 21:30:10 -0700
Subject: [PATCH] add a more detailed description of SAM 2 in README.md

---
 README.md | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index a6d9576..7963e58 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,14 @@
-## Latest updates
+## Latest updates -- SAM 2: Segment Anything in Images and Videos
 
-Please check out our new release on SAM 2 ([demo](https://sam2.metademolab.com/), [repo](https://github.com/facebookresearch/segment-anything-2), [paper](https://ai.meta.com/research/publications/sam-2-segment-anything-in-images-and-videos/)), which enables segmenting anything in **images and videos**!
+Please check out our new release on **Segment Anything Model 2 (SAM 2)**.
+
+* SAM 2 paper: https://arxiv.org/abs/2408.00714
+* SAM 2 code: https://github.com/facebookresearch/segment-anything-2
+* SAM 2 demo: https://sam2.metademolab.com/
+
+  ![SAM 2 architecture](https://github.com/facebookresearch/segment-anything-2/blob/main/assets/model_diagram.png?raw=true)
+
+**Segment Anything Model 2 (SAM 2)** is a foundation model towards solving promptable visual segmentation in images and videos. We extend SAM to video by considering images as a video with a single frame. The model design is a simple transformer architecture with streaming memory for real-time video processing. We build a model-in-the-loop data engine, which improves model and data via user interaction, to collect [**our SA-V dataset**](https://ai.meta.com/datasets/segment-anything-video), the largest video segmentation dataset to date. SAM 2 trained on our data provides strong performance across a wide range of tasks and visual domains.
 
 # Segment Anything