Recognizing the states of objects in a video is crucial for understanding the scene beyond actions and objects. For instance, an egg can be “raw,” “cracked,” and “whisked” while cooking an omelet, and these states can coexist (an egg can be both “raw” and “whisked”). However, most existing research assumes a single object state change (e.g., uncracked to cracked), overlooking the coexistence of multiple object states and the influence of past states on the current state.
We formulate object state recognition as a multi-label classification task that explicitly handles multiple states. We then propose to learn multiple object states from narrated videos by leveraging large language models (LLMs) to generate pseudo-labels from the transcribed narrations, capturing the influence of past states. The challenge is that narrations mostly describe human actions in the video but rarely explain object states. We therefore use LLMs’ knowledge of the relationship between actions and states to derive the missing object states. We further accumulate the derived object states so that past state context is taken into account when inferring the current object state pseudo-labels.
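As a rough illustration, the sketch below shows how such pseudo-labels could be derived by prompting an LLM with each narration together with the states accumulated so far. The query_llm helper, the prompt wording, and the example state vocabulary are placeholders for illustration, not the exact prompts or vocabulary used in the paper.

# Minimal sketch of LLM-based pseudo-labeling with past-state accumulation.
# `query_llm` is a hypothetical helper standing in for any chat-completion API.
from typing import Callable, Dict, List

STATES = ["raw", "cracked", "whisked", "cooked"]  # example vocabulary for "egg"

def derive_state_pseudo_labels(
    narrations: List[str],            # time-ordered transcribed narrations
    target_object: str,               # e.g., "egg"
    query_llm: Callable[[str], str],  # returns a comma-separated list of states
) -> List[Dict[str, int]]:
    """For each narration, ask the LLM which states hold for the target object,
    conditioning on the states accumulated from earlier narrations."""
    pseudo_labels = []
    past_states: List[str] = []
    for text in narrations:
        prompt = (
            f"Object: {target_object}\n"
            f"States observed so far: {', '.join(past_states) or 'none'}\n"
            f"Narration: {text}\n"
            f"Which of these states hold now? Options: {', '.join(STATES)}.\n"
            "Answer with a comma-separated list."
        )
        answer = query_llm(prompt)
        current = [s for s in STATES if s in answer.lower()]
        past_states = sorted(set(past_states) | set(current))
        # Multi-hot pseudo-label over the state vocabulary for this time step.
        pseudo_labels.append({s: int(s in current) for s in STATES})
    return pseudo_labels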
We also collect a new dataset, the Multiple Object States Transition (MOST) dataset, which includes manual multi-label annotations for evaluation and covers 60 object states across six object categories. Experimental results show that our model trained on LLM-generated pseudo-labels significantly outperforms strong vision-language models, demonstrating the effectiveness of our pseudo-labeling framework that considers past context via LLMs.
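For context, a minimal sketch of how a classifier could be trained on such multi-hot pseudo-labels is given below. The feature dimension, the single linear head, and the use of PyTorch are illustrative assumptions, not the paper's exact architecture.

# Sketch of multi-label state classification on per-frame features, trained
# with multi-hot pseudo-labels; states are not mutually exclusive, so a
# binary cross-entropy loss is applied per state.
import torch
import torch.nn as nn

class StateClassifier(nn.Module):
    def __init__(self, feat_dim: int = 512, num_states: int = 60):
        super().__init__()
        self.head = nn.Linear(feat_dim, num_states)  # one logit per state

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        return self.head(frame_feats)  # (batch, num_states) logits

model = StateClassifier()
criterion = nn.BCEWithLogitsLoss()
feats = torch.randn(8, 512)                      # e.g., precomputed frame features
targets = torch.randint(0, 2, (8, 60)).float()   # multi-hot pseudo-labels
loss = criterion(model(feats), targets)
loss.backward()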
We created a new evaluation dataset for temporally localizing the presence of object states. The videos include complicated transitions between different states, which are annotated with dense temporal intervals. The dataset covers various object states, including those that are not necessarily associated with actions (e.g., straight, dry, smooth).
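To make the interval-style annotation concrete, the following sketch rasterizes hypothetical dense interval annotations into per-frame multi-hot labels, which is one way such annotations could be consumed at evaluation time. The dictionary-of-intervals format, frame rate, and state vocabulary are assumptions for illustration, not the released annotation schema.

# Sketch: convert per-state temporal intervals into per-frame multi-hot labels.
from typing import Dict, List, Tuple
import numpy as np

def intervals_to_frame_labels(
    intervals: Dict[str, List[Tuple[float, float]]],  # state -> [(start_s, end_s), ...]
    states: List[str],
    num_frames: int,
    fps: float,
) -> np.ndarray:
    """Return a (num_frames, num_states) binary matrix; overlapping intervals
    naturally yield multiple active states in the same frame."""
    labels = np.zeros((num_frames, len(states)), dtype=np.int64)
    times = np.arange(num_frames) / fps
    for j, state in enumerate(states):
        for start, end in intervals.get(state, []):
            labels[(times >= start) & (times < end), j] = 1
    return labels

# Example: an egg that is both "raw" and "whisked" between 10 s and 25 s.
y = intervals_to_frame_labels(
    {"raw": [(0.0, 40.0)], "whisked": [(10.0, 25.0)]},
    states=["raw", "cracked", "whisked"], num_frames=300, fps=5.0)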
[Example videos for each target object: Apple, Egg, Flour, Shirt, Tire, Wire]
@article{tateno2024learning,
title={Learning Object States from Actions via Large Language Models},
author={Tateno, Masatoshi and Yagi, Takuma and Furuta, Ryosuke and Sato, Yoichi},
journal={arXiv preprint arXiv:2405.01090},
year={2024}
}