In visual search systems, an important issue is how to leverage rich contextual information in a visual computational model to build more robust systems that better satisfy users' needs and intentions. In this paper, we introduce a ranking model that captures the complex relations within product visual and textual information in visual search systems. To understand these relations, we focus on graph-based paradigms that model the connections among product images, product category labels, and product names and descriptions. We develop a unified probabilistic hypergraph ranking algorithm that models the correlations between product visual and textual features, thereby substantially enriching the description of each image. We evaluated the proposed ranking algorithm on a dataset collected from a real e-commerce website. The comparison results demonstrate that our algorithm considerably improves retrieval performance over visual-distance-based ranking.
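The abstract does not include the algorithm itself, but the standard transductive ranking scheme on a probabilistic hypergraph can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a soft incidence matrix `H` whose entry `H[v, e]` is the probability that image `v` belongs to hyperedge `e` (hyperedges here would be hypothetical groupings by category label, name/description keyword, or visual neighborhood), and iterates the usual update `f ← α·Θ·f + (1−α)·y` with `Θ = Dv^{-1/2} H W De^{-1} Hᵀ Dv^{-1/2}`.

```python
import numpy as np

def hypergraph_rank(H, w, y, alpha=0.9, iters=100):
    """Transductive ranking on a probabilistic hypergraph (illustrative sketch).

    H     : (n_vertices, n_edges) soft incidence matrix, H[v, e] = P(v in e)
    w     : (n_edges,) hyperedge weights
    y     : (n_vertices,) query indicator vector (1 at the query image)
    alpha : propagation weight; (1 - alpha) anchors scores to the query
    """
    Dv = H @ w                    # weighted vertex degrees
    De = H.sum(axis=0)            # hyperedge degrees
    dv_inv_sqrt = 1.0 / np.sqrt(Dv)
    # Theta = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
    Theta = (dv_inv_sqrt[:, None] * H) @ np.diag(w / De) @ (H.T * dv_inv_sqrt[None, :])
    f = y.astype(float).copy()
    for _ in range(iters):        # converges since alpha < 1 bounds the spectral radius
        f = alpha * (Theta @ f) + (1 - alpha) * y
    return f                      # rank images by descending score
```

In use, `y` is set to 1 at the query image and 0 elsewhere; images sharing hyperedges (e.g. the same category label) with the query receive elevated scores even when their visual distance is large, which is the intuition behind combining visual and textual features in one hypergraph.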
Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 License.
Kaiman Zeng, Nansong Wu, Arman Sargolzaei, and Kang Yen, “Learn to Rank Images: A Unified Probabilistic Hypergraph Model for Visual Search,” Mathematical Problems in Engineering, vol. 2016, Article ID 7916450, 7 pages, 2016. doi:10.1155/2016/7916450
In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).