Image Similarity Based on Bags of SIFT Descriptors and K-Means Clustering
DOI: https://doi.org/10.31358/techne.v18i02.217
Keywords: scale invariant feature transform, k-means clustering, image similarity
Abstract
Content-based image retrieval is an active area of development that has received much attention from the computer vision community, supported by the ubiquity of the Internet and digital devices. The bag-of-words method, adapted from text retrieval, trains on images' local features to build a visual vocabulary. These visual words represent local features, which are quantized and then clustered into a number of bags. Here, the scale invariant feature transform (SIFT) descriptor is used as the local feature of the images that are compared with each other to measure their similarity; compared to global features, it is robust to clutter and partial occlusion. The main goal of this research is to build a visual vocabulary and use it to measure image similarity across two small image datasets. The k-means clustering algorithm is used to find the centroid of each cluster at different values of k. The experimental results show that the bag-of-keypoints method has potential for use in content-based image retrieval.
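The pipeline the abstract describes (extract local descriptors, cluster them with k-means to form a vocabulary, quantize each image's descriptors into a histogram of visual words, then compare histograms) can be sketched in plain NumPy. This is a minimal illustration, not the paper's implementation: random vectors stand in for 128-dimensional SIFT descriptors, the function names are ours, and a production system would extract real SIFT features (e.g. via OpenCV) and use a tuned k-means.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain Lloyd's k-means: returns k centroids of the rows of X."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each descriptor to its nearest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each centroid to the mean of its assigned descriptors.
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

def bag_of_words(descriptors, centroids):
    """Quantize descriptors against the vocabulary; return a normalized histogram."""
    dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(centroids)).astype(float)
    return hist / hist.sum()

def cosine_similarity(a, b):
    """Similarity between two bag-of-words histograms (1.0 = identical)."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy usage: 8-dimensional random vectors stand in for SIFT descriptors.
rng = np.random.default_rng(1)
img_a = rng.normal(size=(150, 8))          # "descriptors" of image A
img_b = img_a + rng.normal(scale=0.05, size=img_a.shape)  # slightly perturbed copy
vocab = kmeans(np.vstack([img_a, img_b]), k=10)
sim = cosine_similarity(bag_of_words(img_a, vocab), bag_of_words(img_b, vocab))
```

Because the histograms are normalized, the cosine similarity of an image with itself is exactly 1, and near-duplicate images land close to 1; varying `k` trades vocabulary discriminativeness against quantization noise, which is the experiment the abstract reports.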