Recent advances in AI have the potential to revolutionize the world. From the creation of spectacular art, to the composition of poetry, to the design of novel proteins and materials, AI is poised to improve the lives of billions on our planet.
Yet these giant leaps in AI come with risks. Malicious threat actors and rogue nation-states will seek to use these AI advances for a host of malign purposes, such as creating more engaging phishing lures, automatically generating hard-to-detect malware, creating designer pathogens and explosives, stealing intellectual property and data, and worse. The Countering AI Proliferation (CAP) project seeks to help companies involved in AI navigate the choppy waters separating AI for Good from AI for Bad. How can companies that generate AI assets:
- Protect their training data?
- Protect their AI models?
- Protect relevant documents and trade secrets?
- Protect their intellectual property?
- Deter misuse of their technology?
The CAP Project, led by the Northwestern Security & AI Lab (NSAIL), is a partnership between Northwestern University (PI: V.S. Subrahmanian), Georgetown University’s Security Studies Program (PI: Dan Byman), and the Netherlands Defence Academy (PIs: Roy Lindelauf and Herwin Meerveld). Technology alone will not solve these problems. Nor will people alone. We propose the novel, multidisciplinary concept of the Counterproliferation Triad (CoT): People, Processes, and Technology working together as a holistic approach to addressing these questions.
Project Website: https://sites.northwestern.edu/evcap/