Unregulated AI Poses IP Loss, Infringement Risks and Director Liability, Experts Warn
HARARE – Deploying artificial intelligence without clear policy or legal oversight exposes companies, academic institutions, and their leadership to significant risks, including loss of intellectual property ownership, infringement liability, reputational damage, and regulatory sanctions, a new briefing on AI governance has warned.
“If you aren’t careful, you could lose ownership of your own ideas,” the briefing states. “When you use generative AI, the things you create might not be protected by copyright or patents under current laws. Without the right safeguards, you risk your work becoming unprotectable or even claimed by someone else entirely.”
On infringement liability, the document cautions that AI tools may inadvertently reproduce material protected by copyright, trademark, or patent law, often without flagging that they have done so. “If your business relies on or shares these outputs, you could unexpectedly find yourself facing infringement claims.”
Reputational damage is another major concern. “When AI-generated content goes wrong, whether it’s riddled with mistakes, lifts work from others, or reflects hidden biases, the damage isn’t just technical. It can erode the trust and credibility you’ve worked hard to build with your clients, colleagues, and the wider public.”
The briefing adds: “Protecting your reputation means thinking carefully about how you use AI, every step of the way.”
On regulatory compliance, the document notes that as laws around AI continue to evolve, more jurisdictions are rolling out rules designed specifically for the technology. “Not following these new requirements could lead to penalties, audits, or even limits on how you operate.”
The briefing also flags academic and research settings: “Unsupervised application of artificial intelligence in coursework, thesis preparation, or research activities poses risks to academic integrity and the originality of intellectual property for universities and research institutions.”
The briefing summarises that unregulated AI use exposes businesses and directors to four key dangers: IP loss and forfeiture, unintended IP infringement, reputational damage from public misuse, and non-compliance with global data and AI laws. “Boards and executives face growing liability,” it states. “AI must be governed through clear, proactive legal strategy.”
Several global and local case studies illustrate the emerging legal battles. These include South Africa granting a patent to DABUS, sparking the AI inventorship debate; publishers suing OpenAI in India over training data copyright risks; and Alcon suing Tesla over Blade Runner imagery in a generative AI copyright clash. The EUIPO report on generative AI and policy options for AI and IP is also cited.
The briefing further raises the question: “Intellectual Property – Can a machine powered by AI be an inventor?” with reference to African perspectives on AI and IP. Its conclusion is clear: AI is powerful but dangerous when unmanaged, demanding immediate attention from corporate boards, legal teams, and academic institutions alike.
Francis