
IMPROVING SECURITY AND ROBUSTNESS OF HARDWARE AND LEARNING SYSTEMS

Security and robustness are two critical problems in modern computing systems. In this dissertation, we study these two problems in both hardware systems and learning systems.

First, we discuss the robustness problem in hardware systems. Modern microprocessors suffer from significant on-chip variation at advanced technology nodes. The development of CMOS-compatible memristive devices has brought non-volatile capability into silicon technology. We explore new applications for memristive devices to resolve the performance degradation that results from process variation. Novel self-healing flip-flops and clock buffers are developed to automatically detect timing violations and to perform timing recovery by tuning the resistance values of memristor devices. To incorporate these circuit techniques into Very Large Scale Integration (VLSI) circuit design, novel device placement and tuning algorithms have been developed. The proposed design methodology is demonstrated on a 45nm FFT processor design. Our test results show that performance gains of up to 20% can be achieved using the proposed self-healing circuits, with only 1% area overhead.

Second, we study the security problem in hardware systems. Due to the increasing complexity of the design process, outsourcing, and the use of third-party blocks, it becomes ever harder to prevent Trojan insertion and other malicious design modifications. We propose to deploy security invariants as carried proofs to prevent and detect Trojans and malicious attacks and to ensure the security of hardware designs. We mainly investigate confidentiality, one of the most important ingredients of security in hardware systems. Noninterference, in a variety of forms, has been thoroughly studied for confidentiality. As a secure information flow problem, plain noninterference verification has been explicitly exploited in recent research. However, plain noninterference is generally considered too strict, necessitating a more realistic model, relaxed noninterference, as a substitute confidentiality property. We propose a method that leverages indistinguishability relations as relational predicates on a self-composed machine to verify relaxed noninterference in hardware systems under any given downgrading policy.

Then, we investigate the security problem in learning systems. Supervised learning on Deep Neural Networks (DNNs) is data hungry, and optimizing DNN performance in the presence of noisy labels has become of paramount importance, since collecting a large dataset usually brings in noisy labels. Inspired by the robustness of K-Nearest Neighbors (KNN) against data noise, we propose to apply deep KNN for label cleanup. Our approach leverages DNNs for feature extraction and KNN for ground-truth label inference. We iteratively train the neural network and update the labels to simultaneously achieve a higher label recovery rate and better classification performance. Experimental results show that, under the same setting, our approach outperforms existing label correction methods and achieves better accuracy on multiple datasets, e.g., 76.78% on the Clothing1M dataset.
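To make the iterative deep-KNN label cleanup concrete, the following is a minimal sketch of the loop described above. The callables train_model and extract_features are hypothetical stand-ins for the dissertation's DNN training and penultimate-layer feature extraction, and the number of rounds and neighbors are illustrative assumptions rather than values from the dissertation.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def deep_knn_label_cleanup(train_model, extract_features, images, noisy_labels,
                           n_rounds=3, k=10):
    """Iteratively train a DNN on the current labels, embed every sample with
    the trained network, and relabel each sample by the majority vote of its
    k nearest neighbours in feature space (illustrative sketch)."""
    labels = np.asarray(noisy_labels).copy()
    for _ in range(n_rounds):
        model = train_model(images, labels)        # hypothetical DNN training step
        feats = extract_features(model, images)    # hypothetical penultimate-layer features
        knn = KNeighborsClassifier(n_neighbors=k)
        knn.fit(feats, labels)
        labels = knn.predict(feats)                # neighbour-majority relabelling
    return labels
```

Each round retrains the network on progressively cleaner labels, which is what drives both the label recovery rate and the classification accuracy upward in the described approach.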
Finally, we present our work on the robustness problem in learning systems when unexpected, complicated human behavior is involved. This study is conducted under the framework of federated learning (FL), an emerging distributed collaborative learning paradigm adopted by many of today's applications. The success of federated learning relies on the participation of a large number of clients willing to contribute as much of their data as possible. However, due to privacy concerns and high participation costs, clients in practice may not contribute all the data they could possibly collect, which negatively affects the performance of the global model. We propose an incentive mechanism that encourages clients to contribute as much data as possible when participating in federated learning. Unlike previous incentive mechanisms, our approach does not involve money as a reward; it implicitly uses model performance as the reward, e.g., big contributors are paid off with better models. We theoretically prove that, under certain conditions, clients following our incentive mechanism will participate in federated learning with the largest possible amount of data.
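As a purely illustrative sketch of a non-monetary, performance-based reward (not the dissertation's actual mechanism or its theoretical analysis), one way to realize "bigger contributors receive better models" is to hand each client a reward model blended between its own local model and the FedAvg global model in proportion to its relative data contribution.

```python
import numpy as np

def fedavg_with_model_rewards(local_models, data_sizes):
    """Aggregate client models with FedAvg, then give each client a 'reward'
    model whose quality scales with its relative data contribution: larger
    contributors receive a blend closer to the full global model (toy sketch)."""
    sizes = np.asarray(data_sizes, dtype=float)
    weights = sizes / sizes.sum()
    global_model = sum(w * m for w, m in zip(weights, local_models))  # FedAvg
    rewards = []
    for model, size in zip(local_models, sizes):
        alpha = size / sizes.max()                 # relative contribution in [0, 1]
        rewards.append(alpha * global_model + (1.0 - alpha) * model)
    return global_model, rewards
```

Because the global model is typically more accurate than any single local model, a client's reward improves monotonically with its contribution, which captures the incentive structure described in the abstract at a very high level.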
