Document Type



Doctor of Philosophy (PhD)


Electrical Engineering

First Advisor's Name

Wujie Wen

First Advisor's Committee Title

Committee Chair

Second Advisor's Name

Arif I. Sarwat

Second Advisor's Committee Title

Committee Member

Third Advisor's Name

Gang Quan

Third Advisor's Committee Title

Committee Member

Fourth Advisor's Name

Jason Liu

Fourth Advisor's Committee Title

Committee Member

Fifth Advisor's Name

A. Selcuk Uluagac

Fifth Advisor's Committee Title

Committee Member


Computer and Systems Architecture

Date of Defense



The Digital Era is now evolving into the Intelligence Era, driven overwhelmingly by the revolution in Deep Neural Networks (DNNs). DNNs open the door to intelligent data interpretation, turning data and information into actions that create new capabilities, richer experiences, and unprecedented economic opportunities, with game-changing outcomes in domains ranging from image recognition, natural language processing, and self-driving cars to biomedical analysis. Moreover, the emergence of deep learning accelerators and neuromorphic computing further pushes DNN computation from the cloud to edge devices for low-latency, scalable, on-device neural network computing. However, such promising embedded neural network computing systems face several technical challenges. First, performing highly accurate inference for complex DNNs requires massive amounts of computation and memory resources, leaving existing computing platforms with very limited energy efficiency. Even the brain-inspired spiking neuromorphic architecture, which originates from the more biologically plausible spiking neural network (SNN) and relies on the occurrence frequency of a large number of electrical spikes to represent data and perform computation, suffers significant limitations in both energy efficiency and processing speed. Second, although many memristor-based DNN accelerators and emerging neuromorphic accelerators have been proposed to improve the performance-per-watt of embedded DNN computing through the highly parallelizable Processing-in-Memory (PIM) architecture, a critical challenge faced by these memristor-based designs is their poor reliability. A DNN weight, represented as the memristance of a memristor cell, can easily be distorted by the inherent physical limitations of memristor devices, resulting in significant accuracy degradation. Third, DNN computing systems are also subject to ever-increasing security concerns.
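The efficiency gap between conventional rate coding and single-spike (temporal) coding mentioned above can be illustrated with a toy sketch. The encoders, window length, and values below are assumptions made for illustration only, not the accelerator design presented in the dissertation:

```python
def rate_encode(value, t_window=100):
    """Rate coding: a normalized value in [0, 1] is represented by how many
    spikes occur within a fixed time window. Many spikes mean many
    energy-consuming events and a long window before the value is readable."""
    assert 0.0 <= value <= 1.0
    n_spikes = round(value * t_window)
    return [1 if t < n_spikes else 0 for t in range(t_window)]

def single_spike_encode(value, t_window=100):
    """Temporal (single-spike) coding: the same value is carried by the
    timing of ONE spike, with larger values firing earlier. Only one spike
    event per neuron per window, hence far fewer events to process."""
    assert 0.0 <= value <= 1.0
    fire_time = round((1.0 - value) * (t_window - 1))
    return [1 if t == fire_time else 0 for t in range(t_window)]

rate = rate_encode(0.8)
single = single_spike_encode(0.8)
print(sum(rate))    # 80 spike events to represent one value
print(sum(single))  # 1 spike event to represent the same value
```

The sketch only counts spike events; a real accelerator would also weigh circuit-level costs, but the event-count disparity is the intuition behind single-spike designs.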
Attackers can easily fool a normally trained DNN model by exploiting the algorithmic vulnerabilities of DNN classifiers through adversarial examples that mislead the inference results. Moreover, system vulnerabilities in open-source DNN computing frameworks, such as heap overflows, are increasingly exploited to either distort inference accuracy or corrupt the learning environment. This dissertation focuses on designing efficient, reliable, and secure neural network computing systems. An architecture and algorithm co-design approach is presented to address the aforementioned design pillars, namely efficiency, reliability, and security, from a system-level perspective. Three case studies, each centered on one design pillar, are discussed in this dissertation: a Single-spike Neuromorphic Accelerator, a Fault-tolerant DNN Accelerator, and Mal-DNN, a malicious DNN-powered stegomalware. Together they offer the community an alternative way of thinking about developing more efficient, reliable, and secure deep learning systems.
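The adversarial-example vulnerability described above can be sketched against a toy linear classifier. The weights, input, and step size below are made-up values for illustration; they are not the models or attacks studied in the dissertation:

```python
def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

# A "trained" linear model: predicts class +1 if w . x + b > 0, else -1.
w = [2.0, -1.0, 0.5]
b = 0.1

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

# A clean input correctly classified as +1.
x = [0.5, 0.2, 0.4]

# FGSM-style perturbation: nudge every feature by eps in the direction that
# decreases the score for the true class (+1). For a linear model the
# gradient of the score w.r.t. x is just w, so the step is -eps * sign(w).
eps = 0.4
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]

print(predict(x))      # 1  (correct prediction on the clean input)
print(predict(x_adv))  # -1 (flipped by a small, structured perturbation)
```

Real attacks target deep nonlinear models with image-sized inputs, where the per-pixel perturbation can be far smaller and imperceptible, but the mechanism, stepping against the gradient of the classifier's score, is the same.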







Rights Statement


In Copyright. URI:
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).