The senior United Nations (UN) staffer tasked with communications has identified four areas – one of which is peace and security – where “generative artificial intelligence (AI)” is a concern.
Under-Secretary-General for Global Communications Melissa Fleming told an informal gathering of some UN Security Council (UNSC) members and invited guests that “generative AI” leaves few fingerprints. This, according to a report, makes it “harder” for journalists, fact-checkers, law enforcement or ordinary people to detect whether content is real or AI-generated.
On peace and security, the report notes that AI-powered disinformation is “already endangering UN peace and humanitarian operations, putting staff and civilians at risk”. Over 70% of UN peacekeepers responding to a recent survey said mis- and disinformation “severely hampered” their ability to carry out their work.
Last August the UN reported that its then four African peacekeeping missions – MONUSCO in the Democratic Republic of Congo (DRC), MINUSCA in the Central African Republic (CAR), MINUSMA in Mali (until that mission’s closure at year-end) and UNMISS in South Sudan – were actively countering disinformation campaigns aimed at undermining mission credibility.
The initiative is part of fighting back against falsehoods that trigger tensions, violence or even death, the world body notes, adding that it is monitoring how mis- and disinformation and hate speech can undermine health, security and stability, as well as progress toward the Sustainable Development Goals (SDGs).
The effort saw the UN’s African missions use smartphones and editing apps, along with other innovative approaches, to build a “digital army” to combat mis- and disinformation on social media networks and beyond. UN missions in Africa and elsewhere have reported disinformation – in MONUSCO’s case as far back as 2019 – spread via social media campaigns targeting peacekeeping work.
“There is a war going on through social media, radio and traditional news outlets,” MONUSCO head Bintou Keita said. “Fighting deadly disinformation on this new battlefield has been a painful learning curve, but the mission is now proactive on social and other media platforms to help stop its spread.”
On human rights violations, AI is reportedly used to create and spread harmful content, including child sexual abuse material and non-consensual pornographic images targeting women and girls. “The UN is also concerned that anti-Semitic, Islamophobic, racist and xenophobic content could be supercharged by generative AI”.
AI has the potential to manipulate voters and sway public opinion before and during elections. This, according to the report, “poses a significant threat to democratic processes around the world”.
AI tools can also undermine science and public institutions. Climate action is cited as an example, with generative AI “escalating decades-long disinformation to derail it by amplifying false information”.