CommunityNews
Image Diffusion Models Exhibit Emergent Temporal Propagation in Videos
Image diffusion models, though originally developed for image generation, implicitly capture rich semantic structures that enable various recognition and localization tasks beyond synthesis. In this work, we investigate how their self-attention maps can be reinterpreted as semantic label propagation kernels, providing robust pixel-level correspondences between relevant image regions. Extending this mechanism across frames yields a temporal propagation kernel that enables zero-shot object tracking via segmentation in videos. We further demonstrate the effectiveness of test-time optimization strategies (DDIM inversion, textual inversion, and adaptive head weighting) in adapting diffusion features for robust and consistent label propagation. Building on these findings, we introduce DRIFT, a framework for object tracking in videos that leverages a pretrained image diffusion model with SAM-guided mask refinement, achieving state-of-the-art zero-shot performance on standard video object segmentation benchmarks.
Read in full here:
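For intuition, here is a minimal sketch of the attention-as-propagation idea described in the abstract: labels from a reference frame are carried to the current frame through a row-stochastic affinity kernel built from query/key features. This assumes such features have already been extracted from a diffusion model's self-attention layers; the function names and the temperature value are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def propagation_kernel(q_cur, k_ref, temperature=0.07):
    """Build a cross-frame propagation kernel from attention projections.

    q_cur: (N, d) query features for the current frame's locations
    k_ref: (M, d) key features for the reference frame's locations
    Returns a row-stochastic (N, M) kernel: each current-frame location
    is expressed as a softmax-weighted mixture of reference locations.
    (Shapes and temperature are illustrative assumptions.)
    """
    q = F.normalize(q_cur, dim=-1)            # cosine-normalized queries
    k = F.normalize(k_ref, dim=-1)            # cosine-normalized keys
    affinity = q @ k.T / temperature          # (N, M) scaled affinities
    return affinity.softmax(dim=-1)           # attention-style normalization

def propagate_labels(kernel, labels_ref):
    """Propagate soft segmentation labels across frames.

    kernel:     (N, M) kernel from propagation_kernel()
    labels_ref: (M, C) one-hot or soft per-location labels (C classes)
    Returns (N, C) soft labels for the current frame.
    """
    return kernel @ labels_ref

# Usage sketch: track a mask from frame t-1 to frame t.
q_t, k_prev = torch.randn(1024, 64), torch.randn(1024, 64)
labels_prev = torch.zeros(1024, 2); labels_prev[:512, 0] = 1.0
labels_t = propagate_labels(propagation_kernel(q_t, k_prev), labels_prev)
```

In this reading, the softmax over reference-frame keys plays the role the abstract assigns to the self-attention map: it defines pixel-level correspondences that transport labels instead of feature values.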