Representation learning with Less Label and Imperfect Data

dc.contributor.advisor: Nguyen, Hien Van
dc.contributor.committeeMember: Mayerich, David
dc.contributor.committeeMember: Han, Zhu
dc.contributor.committeeMember: Faghih, Rose T.
dc.contributor.committeeMember: Shah, Shishir Kirit
dc.creator: Mobiny, Aryan
dc.date.accessioned: 2021-08-12T16:44:32Z
dc.date.created: December 2020
dc.date.issued: 2020-12
dc.date.submitted: December 2020
dc.date.updated: 2021-08-12T16:44:37Z
dc.description.abstract: Deep learning has attracted tremendous attention from researchers in various fields of information engineering, such as AI, computer vision, and language processing. Its power stems from the ability to learn representations optimized for a specific task, as opposed to relying on hand-crafted features. To yield favorable results, deep models often require a large number of annotated examples for training. However, the annotation process is expensive, time-consuming, and prone to noise and human error. Moreover, in many applications (such as in medical fields) it requires domain knowledge and expertise, and therefore often cannot produce a sufficient number of labels for deep networks to flourish. In this work, we develop practical tools to improve the prediction performance of deep neural networks using fewer labels and imperfect data. In the first part of the thesis, we develop the theory to estimate deep neural network prediction uncertainty, which measures what the model does not know due to a lack of training data. We tie approximate inference in Bayesian models to DropConnect and other stochastic regularization techniques and assess the approximations empirically. We further demonstrate the tools' practicality by applying the suggested techniques to image processing, natural scene understanding, and medical diagnostics. We then exploit Capsule Networks (CapsNets), an alternative proposed to address some of the fundamental issues with training convolutional neural networks (CNNs). We propose novel connectivity techniques and routing mechanisms to extend the use of CapsNets to large-scale, high-dimensional datasets. Our experimental results on several image classification datasets demonstrate that CapsNets compare favorably to CNNs when the training set is large and significantly outperform CNNs on small datasets.
In the final part of the thesis, we propose a memory-augmented capsule network (MEMCAPS) for the rapid adaptation of computer-aided diagnosis models to new domains. It consists of a CapsNet that extracts compact features from high-dimensional input and a memory-augmented task network that exploits the knowledge it has stored from the target domains. Our observations show that MEMCAPS is able to adapt efficiently to unseen domains using only a few annotated samples.
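The abstract's first contribution ties Bayesian approximate inference to DropConnect: running several stochastic forward passes at test time, with a fresh weight mask each pass, and reading the spread of the predictions as model uncertainty. The following is a minimal NumPy sketch of that Monte Carlo DropConnect idea on a toy two-layer network; the network, weights, and function names are illustrative, not the models used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer classifier weights (illustrative only).
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, p_drop=0.5):
    """One stochastic pass: each weight is kept with probability 1 - p_drop."""
    m1 = rng.random(W1.shape) >= p_drop  # DropConnect masks individual weights,
    m2 = rng.random(W2.shape) >= p_drop  # not whole activations as dropout does.
    h = np.maximum(x @ (W1 * m1), 0.0)   # ReLU hidden layer
    return softmax(h @ (W2 * m2))

def mc_dropconnect_predict(x, T=100):
    """Average T stochastic passes; the per-class spread estimates what
    the model does not know (its epistemic uncertainty)."""
    samples = np.stack([forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_dropconnect_predict(x)
```

Keeping the masks active at test time (rather than rescaling weights deterministically) is what turns the regularizer into an approximate posterior sampler; a large `std` flags inputs the model is uncertain about.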
dc.description.department: Electrical and Computer Engineering, Department of
dc.format.digitalOrigin: born digital
dc.format.mimetype: application/pdf
dc.identifier.citation: Portions of this document appear in: Mobiny, Aryan, and Hien Van Nguyen. "Fast capsnet for lung cancer screening." In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 741-749. Springer, Cham, 2018.; Mobiny, Aryan, Supratik Moulik, and Hien Van Nguyen. "Lung cancer screening using adaptive memory-augmented recurrent networks." arXiv preprint arXiv:1710.05719 (2017).; Mobiny, Aryan, Hengyang Lu, Hien V. Nguyen, Badrinath Roysam, and Navin Varadarajan. "Automated classification of apoptosis in phase contrast microscopy using capsule network." IEEE Transactions on Medical Imaging 39, no. 1 (2019): 1-10.; Mobiny, Aryan, Pengyu Yuan, Supratik K. Moulik, Naveen Garg, Carol C. Wu, and Hien Van Nguyen. "Dropconnect is effective in modeling uncertainty of bayesian deep networks." Scientific Reports 11, no. 1 (2021): 1-14.; Mobiny, Aryan, Aditi Singh, and Hien Van Nguyen. "Risk-aware machine learning classifier for skin lesion diagnosis." Journal of Clinical Medicine 8, no. 8 (2019): 1241.; Mobiny, Aryan, Pengyu Yuan, Pietro Antonio Cicalese, and Hien Van Nguyen. "DECAPS: Detail-Oriented Capsule Networks." In International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 148-158. Springer, Cham, 2020.; Mobiny, Aryan, Pietro Antonio Cicalese, Samira Zare, Pengyu Yuan, Mohammadsajad Abavisani, Carol C. Wu, Jitesh Ahuja, Patricia M. de Groot, and Hien Van Nguyen. "Radiologist-level covid-19 detection using ct scans with detail-oriented capsule networks." arXiv preprint arXiv:2004.07407 (2020).
dc.identifier.uri: https://hdl.handle.net/10657/8077
dc.language.iso: eng
dc.rights: The author of this work is the copyright owner. UH Libraries and the Texas Digital Library have their permission to store and provide access to this work. UH Libraries has secured permission to reproduce any and all previously published materials contained in the work. Further transmission, reproduction, or presentation of this work is prohibited except with permission of the author(s).
dc.subject: capsule network
dc.subject: deep learning
dc.subject: machine learning
dc.subject: computer vision
dc.subject: neural network
dc.title: Representation learning with Less Label and Imperfect Data
dc.type.dcmi: Text
dc.type.genre: Thesis
local.embargo.lift: 2022-12-01
local.embargo.terms: 2022-12-01
thesis.degree.college: Cullen College of Engineering
thesis.degree.department: Electrical and Computer Engineering, Department of
thesis.degree.discipline: Electrical Engineering
thesis.degree.grantor: University of Houston
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
Files
Original bundle
MOBINY-DISSERTATION-2020.pdf (51.22 MB, Adobe Portable Document Format)
License bundle
PROQUEST_LICENSE.txt (4.43 KB, Plain Text)
LICENSE.txt (1.81 KB, Plain Text)