Doctor of Philosophy (PhD)
Interpretable AI, AI, Computer Systems, Public Policy
Advances in Artificial Intelligence (AI) have led to spectacular innovations and sophisticated systems for tasks once thought to be achievable only by humans. Examples include playing chess and Go, recognizing faces and voices, driving vehicles, and more. In recent years, the impact of AI has moved beyond offering mere predictive models into building interpretable models that appeal to human logic and intuition, because they ensure transparency and simplicity and can be used to make meaningful decisions in real-world applications. A second trend in AI is characterized by important advances in the realm of causal reasoning. Identifying causal relationships is an important aspect of scientific endeavor in a variety of fields. Causal models and Bayesian inference can help us gain better domain-specific insight and make better data-driven decisions because of their interpretability.
The main objective of this dissertation was to adapt theoretically sound, AI-based, interpretable data-analytic approaches to solve domain-specific problems in the two unrelated fields of Storage Systems and Public Policy. For the first task, we considered the well-studied cache replacement problem in computing systems, which can be modeled as a variant of the well-known Multi-Armed Bandit (MAB) problem with delayed feedback and decaying costs, and developed an algorithm called EXP4-DFDC. We proved theoretically that EXP4-DFDC exhibits an important feature called vanishing regret. Based on the theoretical analysis, we designed a machine-learning algorithm called ALeCaR, with adaptive hyperparameters. We used extensive experiments on a wide range of workloads to show that ALeCaR performed better than LeCaR, the best machine-learning algorithm for cache replacement at that time. We concluded that reinforcement learning can offer an outstanding approach for implementing cache management policies.
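The bandit framing above can be illustrated with a generic exponential-weights sketch in the EXP4 family that handles feedback arriving several rounds late. This is an illustrative toy under invented names and parameters, not the EXP4-DFDC algorithm itself (in particular, it omits decaying costs):

```python
import math
import random

def exp4_delayed(experts, losses, eta=0.1, delay=2, seed=0):
    """Toy EXP4-style learner whose feedback arrives `delay` rounds late.

    experts: list of functions t -> probability vector over arms (advice)
    losses:  losses[t][arm] in [0, 1], observed `delay` rounds after round t
    Illustrative sketch only; not the dissertation's EXP4-DFDC algorithm.
    """
    rng = random.Random(seed)
    n_arms = len(losses[0])
    w = [1.0] * len(experts)            # one weight per expert
    pending = []                        # rounds whose feedback has not arrived
    total_loss = 0.0
    for t in range(len(losses)):
        advice = [e(t) for e in experts]
        z = sum(w)
        # mix the experts' advice into one distribution over arms
        p = [sum(w[i] * advice[i][a] for i in range(len(experts))) / z
             for a in range(n_arms)]
        arm = rng.choices(range(n_arms), weights=p)[0]
        pending.append((t, advice, p, arm))
        total_loss += losses[t][arm]
        # apply only the feedback that has become available by round t
        while pending and pending[0][0] <= t - delay:
            s, adv, ps, a = pending.pop(0)
            est = losses[s][a] / ps[a]  # importance-weighted loss estimate
            for i in range(len(w)):
                w[i] *= math.exp(-eta * adv[i][a] * est)
    return total_loss, w
```

With two single-minded experts, one per arm, and a loss sequence where one arm is always bad, the weight of the expert recommending the bad arm decays once its delayed feedback arrives, so the learner shifts toward the good arm; a vanishing-regret guarantee bounds how much total loss is given up relative to the best expert in hindsight.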
For the second task, we used Bayesian networks to analyze service request data from the three 311 centers providing non-emergency services in Miami-Dade County, New York City, and San Francisco. Using a causal inference approach, this study investigated whether the quality of 311 services was inequitable across neighborhoods with varying demographics and socioeconomic status. We concluded that the services provided by the local governments showed no detectable biases on the basis of race, ethnicity, or socioeconomic status.
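A minimal sketch of the kind of conditional-independence reasoning a Bayesian network supports for such a question; the variables, structure, and probabilities below are entirely invented for illustration and are not the dissertation's model:

```python
from itertools import product

# Toy Bayesian network D -> T -> R with invented probabilities.
# D: neighborhood demographic group, T: request type,
# R: fast resolution (1) or slow resolution (0).
# There is no direct D -> R edge, i.e. no direct demographic
# effect on service quality in this hypothetical model.
p_d = {0: 0.6, 1: 0.4}
p_t_given_d = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.5, 1: 0.5}}
p_r_given_t = {0: {0: 0.2, 1: 0.8}, 1: {0: 0.4, 1: 0.6}}

def joint(d, t, r):
    """Joint probability factorized along the network structure."""
    return p_d[d] * p_t_given_d[d][t] * p_r_given_t[t][r]

def p(r, d=None, t=None):
    """P(R = r | D = d, T = t) by enumeration; None = unconditioned."""
    keep = lambda dd, tt: (d is None or dd == d) and (t is None or tt == t)
    num = sum(joint(dd, tt, r)
              for dd, tt in product((0, 1), repeat=2) if keep(dd, tt))
    den = sum(joint(dd, tt, rr)
              for dd, tt, rr in product((0, 1), repeat=3) if keep(dd, tt))
    return num / den
```

In this toy network, P(R = 1 | T, D) equals P(R = 1 | T) for every group, so resolution speed is conditionally independent of demographics given request type, even though the marginal resolution rates differ across groups because the groups file different mixes of requests. Distinguishing a genuine demographic effect from such a composition effect is precisely the kind of question the causal analysis of the 311 data had to answer.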
Previously Published In
Farzana Yusuf, Shaoming Cheng, Sukumar Ganapati, and Giri Narasimhan. “Causal Inference Methods and their Challenges: The Case of 311 Data”. In Proceedings of the 22nd Annual International Conference on Digital Government Research (2021).
Liana Valdes*, Farzana Yusuf*, Steven Lyons, Eysler Paz, Raju Rangaswami, Jason Liu, Ming Zhao, and Giri Narasimhan. “Learning Cache Replacement with Cacheus”. In Proceedings of the 19th USENIX Conference on File and Storage Technologies (2021). (* equal contribution)
Farzana Yusuf, Vitalii Stebliankin, Giuseppe Vietri, and Giri Narasimhan. “Cache Replacement as a MAB with Delayed Feedback and Decaying Costs”. In LXAI, the NeurIPS Workshop for LatinX in AI Research (2019); arXiv preprint arXiv:2009.11330 (2021).
Yusuf, Farzana Beente, "Interpretability of AI in Computer Systems and Public Policy" (2021). FIU Electronic Theses and Dissertations. 4753.
In Copyright. URI: http://rightsstatements.org/vocab/InC/1.0/
This Item is protected by copyright and/or related rights. You are free to use this Item in any way that is permitted by the copyright and related rights legislation that applies to your use. For other uses you need to obtain permission from the rights-holder(s).