Multipolar World

Current AI risks

Jorge Costa Oliveira

Beyond the future risks associated with general artificial intelligence (AI) and superintelligent AIs, there are current threats that are seldom discussed — yet deserve to be.

On the one hand, AI systems introduce security risks that can lead to unauthorized access to confidential data and to the manipulation of large language models (LLMs): (i) data poisoning; (ii) model inversion; (iii) adversarial examples; (iv) prompt injection; (v) model stealing; (vi) supply-chain attacks; (vii) jailbreaks and misuse; (viii) compliance and trust risks; (ix) AI-enhanced cyberattacks; (x) insecure AI-generated code recommendations; (xi) delayed security patches; (xii) inadequate input validation.

AI companies have also been accused of violating intellectual property rights, beginning with the widespread unauthorized use of data. Major AI firms face copyright-infringement claims because their models are trained on copyrighted materials without permission or compensation, which has sparked lawsuits (see the suits filed by the Toronto Star and the Canadian Broadcasting Corporation, as well as the landmark German ruling in favor of GEMA against OpenAI over ChatGPT's use of song lyrics).

Beyond copyright, other legal challenges persist: (i) liability for data leaks that inadvertently disclose sensitive information and breach data protection laws; (ii) questions about the authenticity of AI-generated content and whether it infringes copyright; (iii) the perpetuation of biases that produce discriminatory outcomes; (iv) the attribution of liability when AI systems make autonomous decisions; (v) regulatory compliance. Moreover, current laws do not explicitly address the ownership and authorship of AI-generated content, including whether an AI agent can be considered an author.

The lack of unified global regulation of AI creates uncertainty and challenges for businesses. Until international rules are adopted, the main economic blocs should urgently put in place public policies ensuring legal compliance by companies developing AI, particularly respect for intellectual property rights. Users of generative AI chatbots (such as ChatGPT) should also be made aware of alternative platforms that emphasize ethics, transparency, and consent (e.g., Bloom).

Another pressing AI-related risk is the sharp rise in carbon emissions generated by AI companies, driven by the enormous energy consumption of data centers and AI systems: about 2% of global electricity use and between 2.5% and 3.7% of worldwide greenhouse gas (GHG) emissions, most of which still come from burning fossil fuels.

According to Meta AI, training a single large AI model can emit up to 284 tons of CO2. The same source notes that Microsoft's GHG emissions have risen by about 30% since 2020, while Google's 2023 emissions were roughly 50% higher than in 2019, both increases linked to AI development. Curiously, Meta omitted its own data. The annual electricity demand of NVIDIA-powered AI servers worldwide is projected to reach between 85.4 and 134 terawatt-hours by 2027.

AI also has other significant environmental impacts, including electronic waste generation, air pollution, and heavy water consumption.

To be fair, companies such as Google, Amazon, and Meta have declared they are seeking alternatives to minimize environmental impact and reduce their carbon footprints — by improving data center energy efficiency and increasing the use of renewable and nuclear energy sources. Over time, AI itself may also help reduce GHG emissions, for instance by optimizing industrial processes and predicting climate patterns.

linkedin.com/in/jorgecostaoliveira
