Document Type

Dissertation

Degree

Doctor of Philosophy (PhD)

Major/Program

Computer Science

First Advisor's Name

Wei Zeng

First Advisor's Committee Title

committee chair

Second Advisor's Name

Deng Pan

Second Advisor's Committee Title

committee member

Third Advisor's Name

Fahad Saeed

Third Advisor's Committee Title

committee member

Fourth Advisor's Name

Ning Xie

Fourth Advisor's Committee Title

committee member

Fifth Advisor's Name

Yuanchang Sun

Fifth Advisor's Committee Title

committee member

Keywords

Artificial Intelligence, Information Quantification, Interpretation of Artificial Intelligence

Date of Defense

7-2-2021

Abstract

With the great success of the Deep Neural Network (DNN), how to obtain a trustworthy model has attracted increasing attention. In training, raw data are usually fed to the DNN directly. However, the entire training process is a black box, in which the knowledge learned by the DNN is out of our control, and this opacity carries many risks. The most common one is overfitting. As research on neural networks has deepened, additional and possibly greater risks have been discovered. Related research shows that unknown clues can hide in the training data because of the randomness of the data and the finite scale of the training set. Some of these clues build meaningless but explicit links between the input data and the output data, called "shortcuts". The DNN then makes its decisions based on these "shortcuts", a phenomenon also known as "network cheating". The "shortcut" knowledge learned by the DNN undermines the entire training and makes the performance of the DNN unreliable. Therefore, we need to control the raw data used in training. In this dissertation, we name the explicit raw data "content" and the implicit logic learned by the DNN "knowledge".

By quantifying the information in the DNN's training, we find that the information learned by the network is much less than the information contained in the dataset. This indicates that it is unnecessary to train the neural network with all of the information: training with partial information can achieve an effect similar to training with full information. In other words, it is possible to control the content fed into the DNN, and the strategy presented in this study can reduce the risks (e.g., overfitting and shortcuts) mentioned above. Moreover, using reconstructed data (with partial information) to train the network can reduce the complexity of the network and accelerate training. In this dissertation, we provide a pipeline to implement content control in DNN training. We use a series of experiments to demonstrate its feasibility in two applications: human brain anatomy structure analysis, and human pose detection and classification.

Identifier

FIDC010269

ORCID

https://orcid.org/0000-0003-3962-9853

Previously Published In

Yang L, Yan L, Zeng W, et al. Train the Neural Network with Abstract Images. International Conference on Computing, Communication and Automation, 2021. (in press)

Wang Y, Yang L, Yang Y, et al. Review the Knowledge Distillation Phenomenon by Quantifying the Task-related Information. 29th International Conference on Case-Based Reasoning. (in press)

Zhang H, Yijun Y, Yang L, et al. Diffeomorphic Registration of 3D Surfaces with Point and Curve Landmarks. Computers & Graphics, Elsevier. (in press)

Yang L, Yang Y, et al. The Distance Between the Weights of the Neural Network Is Meaningful. arXiv preprint arXiv:2102.00396.

Yang L, Razib M, He K C, et al. Conformal Welding for Brain-Intelligence Analysis, International Symposium on Visual Computing. Springer, Cham, 2019: 368-380.

Yang L, Yang C. A Randomized Large-scale Voronoi Diagram Construction Algorithm Based on Voronoi Area Primitive. 2016 17th International Conference on Geometry and Graphics, 57-59.

Creative Commons License

Creative Commons Attribution-Noncommercial 4.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 4.0 License.

Rights Statement

In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).