Please use this identifier to cite or link to this item: http://biblioteca.unisced.edu.mz/handle/123456789/2705
Full metadata record
DC Field                 Value                                                      Language
dc.contributor.author    Mori, Minoru                                               -
dc.date.accessioned      2023-04-20T10:23:57Z                                       -
dc.date.available        2023-04-20T10:23:57Z                                       -
dc.date.issued           2012-06-27                                                 -
dc.identifier.citation   102pg                                                      en_US
dc.identifier.uri        http://biblioteca.unisced.edu.mz/handle/123456789/2705     -
dc.description.abstract  In the field of document recognition and understanding, whereas scanned paper documents were previously the only recognition target, various new media such as camera-captured documents, videos, and natural scene images have recently begun to attract attention, owing to the growth of the Internet/WWW and the rapid adoption of low-priced digital cameras and video cameras. The keys to the breakthrough include character detection in complex backgrounds, discrimination of characters from non-characters, recognition of distinctive modern and ancient fonts, fast retrieval techniques for large-scale scanned document collections, multilingual OCR, and unconstrained handwriting recognition. This book presents recent advances, applications, and new ideas relevant to document recognition and understanding, ranging from technical topics such as image processing, feature extraction, and classification to new applications such as camera-based recognition and character-based natural scene analysis. It is intended to highlight new trends and to serve as a reference source for academic researchers and for professionals working in the document recognition and understanding field.  en_US
dc.language.iso          en                                                         en_US
dc.publisher             InTech                                                     en_US
dc.subject               Automatism                                                 en_US
dc.subject               Characters                                                 en_US
dc.subject               Digitalization                                             en_US
dc.subject               Documentation                                              en_US
dc.title                 Recent Advances in Document Recognition and Understanding  en_US
dc.type                  Book                                                       en_US
Appears in Collections: Inteligência Artificial

Files in This Item:
File                                                           Description  Size     Format
Recent_advances_in_document_recognition_and_understanding.pdf               4.17 MB  Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.