We are developing a dietary assessment system that records daily food intake through the use of food images. Recognizing food in an image is difficult because of the large visual variation introduced by different eating and preparation conditions. The task becomes even more challenging when different foods have similar visual appearance. In this paper, we propose to incorporate two types of contextual dietary information, food co-occurrence patterns and personalized learning models, into food image analysis to reduce ambiguity in food visual appearance and improve food recognition accuracy. We evaluate our model on 1453 food images acquired by 45 participants in natural eating conditions. The results show that incorporating contextual dietary information improves food categorization accuracy by about 10%.
Keywords: Contextual Information; Dietary Assessment; Food Recognition; Image Segmentation.