09-09-2016, 10:14 AM
1453971306-LearningtoRankUsingUserClicksalgo.docx (Size: 15.42 KB / Downloads: 4)
To demonstrate the effectiveness of the visual and click features based learning to rank (VCLTR) model, we conduct several experiments on a large-scale image search dataset. Two issues make it difficult to propose such an image ranking model. First, the ranking of images is determined by the interactions among those images: the ranking result is a structured list, which traditional learning algorithms cannot handle. Second, unlike click features, which are extracted for a specific query, visual features are obtained from the images regardless of the query, so traditional learning to rank approaches cannot be applied directly. Accordingly, we propose a new objective function for our learning to rank model under the framework of large margin structured output learning. The objective function has two terms: the click features are integrated through a linear model, and the visual features are incorporated through a hypergraph regularizer, which captures high-order relationships among images when building the graph.
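A rough sketch of how such a two-term objective can be combined is shown below. This is an illustrative toy, not the paper's actual formulation: the hypergraph Laplacian follows the standard normalized construction, and the names `X_click`, `w`, `H`, and the smoothing weight `lam` are all assumptions for the example.

```python
import numpy as np

def hypergraph_laplacian(H, edge_weights=None):
    """Normalized hypergraph Laplacian L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}.

    H is an (n_images, n_edges) incidence matrix; edge_weights defaults to
    uniform weights. Images sharing a hyperedge get correlated scores.
    """
    n, m = H.shape
    w = np.ones(m) if edge_weights is None else edge_weights
    dv = H @ w                                  # vertex degrees
    de = H.sum(axis=0)                          # edge degrees (vertices per edge)
    dv_is = 1.0 / np.sqrt(np.maximum(dv, 1e-12))
    theta = (dv_is[:, None] * H) @ np.diag(w / np.maximum(de, 1e-12)) @ (H.T * dv_is[None, :])
    return np.eye(n) - theta

def smoothed_scores(X_click, w, H, lam=1.0):
    """Combine the two terms: a linear score from click features,
    then a hypergraph smoothness penalty on the visual relationships.

    Solves min_f ||f - s||^2 + lam * f^T L f, whose closed form is
    (I + lam * L) f = s.
    """
    s = X_click @ w                             # linear model over click features
    L = hypergraph_laplacian(H)
    return np.linalg.solve(np.eye(len(s)) + lam * L, s)
```

The closed-form smoothing step illustrates the regularizer's effect: images that share a hyperedge (e.g. visually similar images) are pulled toward similar ranking scores, while the click-feature term anchors each score to query-specific evidence.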
Given a new input query q_t with its click and visual features, we predict the corresponding ranking with the learned parameter w. The ranking prediction is obtained by solving the same problem with w replaced by the learned parameter, i.e., by taking the ranking that maximizes the scoring function.
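For a scoring function that assigns each image an individual score, this maximization over rankings reduces to sorting, as the small sketch below illustrates (the function name and the per-image-score assumption are ours, not from the paper):

```python
import numpy as np

def predict_ranking(scores):
    """Inference step: return the ranking (as image indices, best first)
    that maximizes the total score under the learned parameter.
    With per-image scores this argmax is simply a descending sort."""
    return np.argsort(-np.asarray(scores))
```

For example, `predict_ranking([0.2, 0.9, 0.5])` places image 1 first, then image 2, then image 0.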