Show simple item record

A LoCATe‐based visual place recognition system for mobile robotics and GPGPUs

dc.contributor.author: Bampis, Loukas
dc.contributor.author: Chatzichristofis, Savvas A.
dc.contributor.author: Iakovidou, Chryssanthi
dc.contributor.author: Gasteratos, Antonios
dc.contributor.author: Boutalis, Yiannis
dc.contributor.author: Amanatiadis, Angelos
dc.description.abstract: In this paper, a novel visual place recognition approach based on a visual vocabulary of the Color and Edge Directivity Descriptor (CEDD) is evaluated on the loop-closure detection task. Although CEDD was originally designed to describe the color and texture information of an input image globally, targeting image indexing and retrieval tasks, its ability to characterize single feature points has already been demonstrated. Thus, instead of using CEDD as a global descriptor, we adopt a bottom-up approach and feed its localized version, the Local Color And Texture dEscriptor (LoCATe), into a state-of-the-art visual place recognition technique based on visual word vectors. We also employ a parallel execution pipeline, based on a previous work of ours, that exploits well-established General-Purpose Graphics Processing Unit (GPGPU) computing. Our experiments show that using CEDD as a local descriptor produces highly accurate visual place recognition results, while the adopted parallelization allows a real-time implementation even on a low-cost mobile device. (en_UK)
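The abstract's visual-word-vector pipeline can be illustrated with a minimal bag-of-visual-words sketch. This is not the authors' implementation: the LoCATe/CEDD descriptors are replaced by random stand-in vectors, the vocabulary is trained with a toy k-means loop, and all function names (`build_vocabulary`, `word_vector`, `similarity`) are hypothetical. It shows only the general idea of quantizing local descriptors against a visual vocabulary and comparing places via cosine similarity of their normalized histograms.

```python
# Hedged sketch of bag-of-visual-words place matching. Random vectors
# stand in for LoCATe/CEDD local descriptors; all names are illustrative.
import numpy as np

def build_vocabulary(descriptors, k, iters=10, seed=0):
    """Toy k-means clustering standing in for offline vocabulary training."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)]
    for _ in range(iters):
        # Distance of every descriptor to every cluster center.
        d = np.linalg.norm(descriptors[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = descriptors[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def word_vector(descriptors, vocabulary):
    """Quantize one image's local descriptors into an L2-normalized histogram."""
    d = np.linalg.norm(descriptors[:, None] - vocabulary[None], axis=2)
    hist = np.bincount(d.argmin(axis=1), minlength=len(vocabulary)).astype(float)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

def similarity(v1, v2):
    """Cosine similarity between two visual-word vectors (both normalized)."""
    return float(v1 @ v2)

rng = np.random.default_rng(1)
train = rng.normal(size=(500, 16))        # stand-in local descriptors
vocab = build_vocabulary(train, k=32)
place_a = word_vector(rng.normal(size=(60, 16)), vocab)
place_b = word_vector(rng.normal(size=(60, 16)), vocab)
print(similarity(place_a, place_a))       # identical places score ~1
print(similarity(place_a, place_b))
```

In the actual system a query's word vector would be scored against all previously visited places, with a high similarity triggering a loop-closure hypothesis; the paper further accelerates the descriptor extraction and matching stages on a GPGPU.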
dc.publisher: John Wiley & Sons Ltd (en_UK)
dc.relation.ispartofseries: Concurrency and Computation: Practice and Experience
dc.subject: mobile robotics (en_UK)
dc.subject: visual place recognition (en_UK)
dc.subject: Color and Edge Directivity Descriptor (CEDD) (en_UK)
dc.title: A LoCATe-based visual place recognition system for mobile robotics and GPGPUs (en_UK)
