FocusNET: An autofocusing learning‐based model for digital lensless holographic microscopy

ABSTRACT: This paper reports on a convolutional neural network (CNN)-based regression model, called FocusNET, to predict the accurate reconstruction distance of raw holograms in Digital Lensless Holographic Microscopy (DLHM). The proposal provides a physical-mathematical formulation that extends its use to DLHM setups with optical and geometrical conditions different from those used to record the training dataset; this unique feature is tested by applying the proposal to holograms of diverse samples recorded with different DLHM setups. Additionally, a comparison between FocusNET and conventional autofocusing methods in terms of processing time and accuracy is provided. Although the proposed method predicts reconstruction distances with a standard deviation of approximately 54 µm, accurate information about the samples in the validation dataset is still retrieved. Compared to a method that uses a stack of reconstructions to find the best focal plane, FocusNET performs 600 times faster, as no hologram reconstruction is needed. When implemented in batches, the network can achieve up to a 1200-fold reduction in processing time, depending on the number of holograms to be processed. The training and validation datasets, and the code implementations, are hosted in a freely accessible public GitHub repository.


Authors:
Pabón Vidal, Adriana Lucía
García Sucerquia, Jorge Iván
Gómez Ramírez, Alejandra
Herrera Ramírez, Jorge Alexis
Buitrago Duque, Carlos Andrés
Lopera Acosta, María Josef
Montoya, Manuel
Trujillo Anaya, Carlos Alejandro
Resource type:
Research article (published version)
Date of publication:
2023
Journal:
Optics and Lasers in Engineering (Opt. Lasers Eng.), vol. 165, pp. 1–10
DOI:
10.1016/j.optlaseng.2023.107546
ISSN:
0143-8166 (print); 1873-0302 (electronic)
Publisher:
Elsevier, London, England
Extent:
10 pages, application/pdf
Research group:
Grupo Malaria
Institution:
Universidad de Antioquia
Repository:
Repositorio UdeA (Repositorio Institucional de la Universidad de Antioquia)
Language:
eng
OAI Identifier:
oai:bibliotecadigital.udea.edu.co:10495/42049
Online access:
https://hdl.handle.net/10495/42049
Download (PDF, PabonAdriana_2023_FocusNET_Lensless_Microscopy.pdf):
https://bibliotecadigital.udea.edu.co/bitstreams/b0aa88f4-57fd-4bf6-a460-a2eb36f87367/download
Keywords:
Deep Learning (Aprendizaje Profundo)
Microscopy (Microscopía)
MeSH:
https://id.nlm.nih.gov/mesh/D000077321
https://id.nlm.nih.gov/mesh/D008853
Rights:
openAccess
License:
http://creativecommons.org/licenses/by-nc-nd/2.5/co/
https://creativecommons.org/licenses/by-nc-nd/4.0/