This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum, The Institute, or IEEE.
Many in the civilian artificial intelligence community don’t seem to realize that today’s AI innovations could have serious consequences for international peace and security. Yet AI practitioners—whether researchers, engineers, product developers, or industry managers—can play critical roles in mitigating risks through the decisions they make throughout the life cycle of AI technologies.
There are many ways in which civilian advances in AI could threaten peace and security. Some are direct, such as the use of AI-powered chatbots to create disinformation for political-influence operations. Large language models can also be used to write code for cyberattacks and to facilitate the development and production of biological weapons.
Other ways are more indirect. For example, AI companies' decisions about whether to make their software open source, and under what conditions, have geopolitical implications. Such decisions determine how states or nonstate actors gain access to critical technology, which they might use to develop military AI applications, potentially including autonomous weapons systems.
AI companies and researchers must become more aware of the challenges, and of their capacity to do something about them.
Change needs to start with AI practitioners’ education and career development. Technically, there are many options in the responsible-innovation toolbox that AI researchers could use to identify and mitigate the risks their work presents. They must be given opportunities to learn about such options, including IEEE 7010: Recommended Practice for Assessing the Impact of Autonomous and Intelligent Systems on Human Well-being, IEEE 7007-2021: Ontological Standard for Ethically Driven Robotics and Automation Systems, and the National Institute of Standards and Technology’s AI Risk Management Framework.
If education programs provide foundational knowledge about the societal impact of technology and the way technology governance works, AI practitioners will be better empowered to innovate responsibly and be meaningful designers and implementers of regulations.
What Needs to Change in AI Education
Responsible AI requires a spectrum of capabilities that are typically not covered in AI education. AI should no longer be treated as a pure STEM discipline but rather as a transdisciplinary one that requires technical knowledge, yes, but also insights from the social sciences and humanities. There should be mandatory courses on the societal impact of technology and responsible innovation, as well as specific training on AI ethics and governance.
Those subjects should be part of the core curriculum at both the undergraduate and graduate levels at all universities that offer AI degrees.