Researcher explores vulnerabilities of AI systems to online misinformation
A University of Texas at Arlington researcher is working to increase the security of natural language generation (NLG) systems, such as those used by ChatGPT, to guard against misuse and abuse that could allow the spread of misinformation online. Credit: UT Arlington A University of Texas at Arlington researcher is working to increase the security […]
A University of Texas at Arlington researcher is working to increase the security of natural language generation (NLG) systems, such as those used by ChatGPT, to guard against misuse and abuse that could allow the spread of misinformation online.
Credit: UT Arlington
A University of Texas at Arlington researcher is working to increase the security of natural language generation (NLG) systems, such as those used by ChatGPT, to guard against misuse and abuse that could allow the spread of misinformation online.
Shirin Nilizadeh, assistant professor in the Department of Computer Science and Engineering, has earned a five-year, $567,609 Faculty Early Career Development Program (CAREER) grant from the National Science Foundation (NSF) for her research. Understanding the vulnerabilities of artificial intelligence (AI) to online misinformation is “an important and timely problem to address,” she said.
“These systems have complex architectures and are designed to learn from whatever information is on the internet. An adversary might try to poison these systems with a collection of adversarial or false information,” Nilizadeh said. “The system will learn the adversarial information in the same way it learns truthful information. The adversary can also use some system vulnerabilities to generate malicious content. We first need to understand the vulnerabilities of these systems to develop detection and prevention techniques that improve their resilience to these attacks.”
The CAREER Award is the NSF’s most prestigious honor for junior faculty. Recipients are outstanding researchers but are also expected to be outstanding teachers through research, educational excellence and integrating education and research at their home institutions.
Nilizadeh’s research will include a comprehensive look at the types of attacks that NLG systems are susceptible to and the creation of AI-based optimization methods to examine the systems against different attack models. She also will explore an in-depth analysis and characterization of vulnerabilities that lead to attacks and develop defensive methods to protect NLG systems.
The work will focus on two common natural language generation techniques: summarization, and question-answering. In summarization, the AI is given a list of articles and asked to summarize their content. In question answering, the system is given a document, finds answers to questions in that document and generates text answers.
Hong Jiang, chair of the Department of Computer Science and Engineering, underscored the importance of Nilizadeh’s research.
“With large language models and text-generation systems revolutionizing how we interact with machines and enabling the development of novel applications for health care, robotics and beyond, serious concerns emerge about how these powerful systems may be misused, manipulated or cause privacy leakages and security threats,” Jiang said. “It is threats like these that Dr. Nilizadeh’s CAREER Award seeks to defend against by exploring novel methods for enhancing the robustness of such systems so that misuses can be detected and mitigated, and end-users can trust and explain the outcomes generated by the systems.”
What's Your Reaction?