AI in Advertising

An ACA Content Feature Initiative

AI in Advertising: How AI regulation is shaping the future of advertising and its impact

Welcome to the September AI in Advertising digest on Bizcommunity, brought to you by the Association for Communication and Advertising (ACA) and the ACA's Future Industry* group, a think tank grappling with this coming wave of change.
Source: © 123rf

This month we turn our attention to a key topic in AI: the state of regulation and the legal issues it raises. A fair summary is that most of the law isn't settled, and many countries have yet to pass significant legislation to govern AI.

South Africa is gradually embarking on its own AI regulation journey, and Musa Kalenga, who has been preparing a submission to the national government on behalf of the ACA, discusses this below.

For most agencies and clients, the major legal and risk worries are:

  • How secure are AI platforms? If I create content or share data on these platforms, will it be protected?

  • Who owns the copyright on AI-generated content? If I make an image on Midjourney or write some copy on ChatGPT, do I own that or does the tech provider?

  • Does the generated content infringe the copyright of the content owners on which the model was trained?

  • Can you trust AI? Is it going to talk nonsense (what the kids call "hallucinating") and feed you false information?

  • Is there an ethical dilemma in passing work to AI which will put humans out of work?

  • Can AI be used to violate human rights, for example by generating and disseminating misinformation or hate speech?

  • Is AI dangerous – is it getting so smart that it could go rogue and cause real damage?

The short answer to all of these is that there is no clear answer yet.

Many laws are still in draft form, many lawsuits are still pending, and the landscape is constantly changing.

But hopefully this digest will provide the latest updates to enable you to make smart choices about when and how you engage with AI.

In the words of Stephen Hollis, our legal advisor at the ACA, "The current state of play in South Africa is that AI regulation is not yet in existence – it is very much an ‘AI Wild West’ where AI systems are being unleashed without any real impact assessments being done on where government should look to set up some guardrails to ensure that rightsholders are not prejudiced by tech firms."

With that said, let's take a closer look.


Unlocking AI's future in marketing and communications: A policy vision for South Africa by Musa Kalenga
Regulatory round-up
Lawsuits round-up
What does it all mean for companies and users?

Unlocking AI's future in marketing and communications: A policy vision for South Africa by Musa Kalenga

In a bold move towards digital transformation, South Africa's Department of Communications and Digital Technologies recently held an AI planning session, emphasising the need for a strategic policy framework to harness AI's potential in marketing and communications.

The framework must address AI’s ethical, social, and economic implications while driving innovation and ensuring equitable benefits.

AI is set to revolutionise marketing by transforming how businesses engage with consumers.

However, to unlock AI's full potential, a supportive government policy is essential. Here's how AI can reshape the industry and the policy interventions required to make this transformation effective.

Enhanced targeting and personalisation: Data-driven marketing

AI excels at targeting and personalisation, enabling marketers to deliver tailored messages based on deep data insights. However, the success of AI-driven marketing depends on ethical access to large datasets.

For AI to thrive in South Africa, policies must support the ethical collection, storage, and use of consumer data. The National AI Policy Framework must focus on data governance, ensuring that marketers can utilise data responsibly while protecting consumer privacy.

By promoting data anonymisation and secure storage technologies, the government can create a regulatory environment that supports AI in marketing while safeguarding consumer rights.
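To make the idea of data anonymisation concrete, here is a minimal sketch (in Python) of pseudonymisation, one common approach to anonymising consumer records before they enter an analytics or AI pipeline. The field names and the salt value are hypothetical, and this is an illustration of the principle rather than a PoPIA compliance recipe.

    import hashlib

    SALT = "replace-with-a-secret-value"  # hypothetical; in practice, store this securely

    def pseudonymise(record: dict) -> dict:
        """Replace the direct identifier with a salted hash; keep non-identifying fields."""
        hashed_id = hashlib.sha256((SALT + record["email"]).encode("utf-8")).hexdigest()
        return {
            "customer_id": hashed_id,
            "segment": record["segment"],
            "last_purchase_category": record["last_purchase_category"],
        }

    # Hypothetical consumer record used purely for illustration
    print(pseudonymise({
        "email": "consumer@example.com",
        "segment": "urban young professional",
        "last_purchase_category": "outdoor",
    }))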

Boosting efficiency and productivity: AI-driven innovation

AI enhances efficiency by automating tasks such as data analysis and campaign management, freeing marketers to focus on strategy and creativity.

To harness these benefits, government policies should promote widespread adoption of AI technologies. Investment in AI infrastructure, including high-speed internet and AI-specific tools, will be crucial.

The government can incentivise businesses, particularly SMEs, with tax breaks or grants to adopt AI-driven marketing tools, helping them compete on a global scale.

New business models and revenue streams

AI is not just improving current practices but opening doors to new business models like AI-powered recommendation engines and predictive analytics.

To fully capitalise on these opportunities, South Africa needs a policy environment that encourages AI innovation and commercialisation.

The National AI Policy should prioritise funding for research and development (R&D) and establish AI innovation hubs.

These initiatives would support startups and facilitate access to venture capital, enabling companies to innovate and develop cutting-edge AI solutions.

Impact on the labour market

The adoption of AI in marketing will significantly impact South Africa’s labour market.

While AI will automate tasks like content creation and data analysis, it will also create new job roles in AI tool management, data interpretation, and AI system oversight.

Workers will need to reskill and upskill, acquiring competencies in AI technologies to remain relevant.

To support this shift, the government should implement AI-focused education and training programmes and encourage public-private partnerships that foster the development of AI talent.

Offering incentives for reskilling will help businesses prepare their employees for AI’s transformative role in marketing.

Addressing the ethical dilemma: Responsible AI

As AI becomes integrated into marketing, concerns about data privacy and algorithmic bias are inevitable.

To mitigate these risks, the government must establish a robust ethical framework focused on transparency, accountability, and fairness.

An independent oversight body should be created to monitor AI practices, ensuring that AI systems are ethical and consumer trust is maintained.

The way forward

South Africa’s AI planning session highlights the urgent need for strategic policies that prioritise data governance, ethical AI use, and innovation. With the right framework, AI will drive marketing transformation, economic growth, and job creation, positioning South Africa as a global leader in AI-driven marketing.

Regulatory round-up

Here are a few updates on the major legislative movements in this space.

  • California has passed a new bill (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) which imposes a set of safety measures on AI companies and could radically change how AI operates given that many AI companies are based in California.

    The law, among other things, requires that AI models can be shut down if they go rogue, limits the kind of training a model can be exposed to, and establishes testing procedures to assess the potential harm an AI could cause.

    This has led to a chorus of complaints from tech companies, which argue that it will limit innovation and that it is too early in the evolution of AI to be taking such a heavy hand.

    Most of the provisions won't be evident to users of AI tools, although the law could make new features slower to arrive and impede progress toward the goal of creating an "artificial general intelligence".

  • In the European Union the AI Act has come into effect and seeks to "build trust in AI" according to Curtis Wilson of the Software Integrity Group.

    According to ChatGPT the Act "focuses on ensuring transparency, accountability, and safety, with stricter requirements for high-risk AI systems, such as those used in critical sectors like healthcare, finance, and law enforcement".

    Again, this kind of law is more about governing how AI models are created and managed, and about limiting any harm they may do when deployed by unscrupulous (or careless) developers.

  • South Africa, as stated earlier, currently has no regulation governing AI.

    The Communications Minister has drafted a National AI Plan on which public submissions are being taken, but as ever, the wheels move slowly around here. Some existing legislation, like PoPIA and the Copyright Bill, applies but already seems wildly out of date given the new developments (in fact, the new Copyright Amendment Bill misses the AI topic entirely).

If you'd like to do a deep dive into the state of AI regulation worldwide we found this resource helpful in going country by country through the current regulatory environment: Global Regulatory Tracker.

Lawsuits round-up

There are a ton of pending lawsuits against AI companies, mostly by copyright owners up in arms about how AI models were trained.

For a full rundown visit this page.

Here are just some key ones.

  1. Large Language Models

    • Alter vs. OpenAI, Tremblay vs. OpenAI and others are a set of lawsuits brought by a range of authors and authors’ associations against Microsoft and OpenAI alleging copyright infringement in the training of their AI models.

      It is a fact that OpenAI (and others) consumed large numbers of books and publications in training their models, without remunerating the authors.

      The open question is whether utilising copyright material for training purposes amounts to copyright infringement. The most likely outcome here is some kind of financial settlement and an obligation to license training materials in future.

    • New York Times vs. OpenAI is a separate case in which the NYT claims OpenAI unfairly trained its models on NYT content. OpenAI argues that the situation is similar to the Google Books lawsuit, where it was found that, despite having ingested a vast number of books, Google "transformed" the content rather than using it verbatim, and that this was thus fair use.

      There are important differences between these cases, however, in that Google Books was ultimately deemed to help authors, whereas AI tools arguably compete with them.

  2. Generative Art

    • Zhang, Anderson, Larson, Fink vs. Google is a lawsuit filed by these visual artists against Google, claiming (similar to the NYT case) that Google's Imagen AI was trained on their copyright material without their permission.

      These artists have filed a similar suit against Stability AI.

    • Getty Images vs Stability AI is a suit in which Getty claims Stability AI's models were trained on 12 million photographs it owns, infringing its copyright.

      A novel feature of this case is that Stability AI can actually generate images with a Getty watermark, a useless but compelling demonstration of the point Getty is trying to make. This case is very much ongoing.

This resource is a good place to keep tabs on the various landmark legal cases underway.

What does it all mean for companies and users?

Most regulation – in draft or passed – is aimed at limiting AI's ability to go rogue and do harm.

This impacts the tech companies but has a limited impact (positive or negative) on end users.

Users are understandably concerned that if they generate content using GPT or Midjourney, etc., they may be inadvertently infringing copyright.

The AI companies have so far taken a fairly brazen attitude to the problem, with Google outright committing to assume responsibility for all copyright-related risks associated with using its tools.

It's hard to see how these lawsuits put the genie back in the bottle – the models are trained and are now getting smarter with usage.

A win for copyright owners here could force platforms to declare their training sources, but most likely some money will change hands and things will move ahead, as there is now no way of excluding specific copyright material from the training data set.

As a user – or a company – the truth is that right now there is no established law to go by, apart from outdated and largely inapplicable copyright law.

That said, if you think it through, you will quickly realise that these claims concern only a tiny fraction of the data used to train full AI models, and that any copyright holder faces impossible odds in proving that your image or block of text used their material specifically.

Thus, the only feasible claim is against the tech platform itself – as is evident from who the defendants are in these cases.

Stephen Hollis of law firm Adams & Adams warns, "The risk to advertisers and agencies posed by AI include that their copyright protected materials (anything that could be mined online by AI systems), including already produced and flighted advertisements, and other marketing materials could be re-purposed and re-packaged by their competitors and even clients themselves by using generative AI systems... without any recognition or remuneration flowing back to the initial content originators."

The solution to Hollis' concern may be to run your own instances of AI models, trained on your specific content, so that the material they generate is not released into the wild or incorporated into a public platform.

Since the models come pre-trained, this doesn't deal with the material used in their initial training, but it does seem to address many other concerns. And it's easier than you think to do this, as many AI systems are open source.
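As a concrete illustration of what running your own instance can look like, the sketch below (in Python) loads an open-weight language model with the Hugging Face transformers library and generates copy entirely on your own hardware, so briefs and outputs are not shared with a third-party platform. The model name and the prompt are illustrative assumptions only, and running locally does not resolve the questions about the data used in the model's original training.

    # Requires the transformers and torch packages and a downloaded open-weight model.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "mistralai/Mistral-7B-Instruct-v0.2"  # illustrative; substitute any open-weight model you have vetted

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    prompt = "Draft three headline options for a winter tyre-safety campaign."
    inputs = tokenizer(prompt, return_tensors="pt")

    # Generation happens locally; no prompt or output leaves your environment.
    output_ids = model.generate(**inputs, max_new_tokens=120, do_sample=True, temperature=0.8)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))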

This is a new and developing space and there will be – probably next year – some precedent we can base our decision-making on.

Governments will eventually weigh in with laws and regulations but don't expect that to happen soon.

In the meantime, you are either taking advantage of these technologies, risks aside, or you are running a potentially greater risk in falling behind those who are.

About the ACA

We are the official industry body for advertising agencies and professionals in South Africa, counting most major agencies among our members.

Find out more about the ACA.

*ACA Future Industry committee comprises Jarred Cinman, Vincent Maher, Musa Kalenga, Haydn Townsend, Matthew Arnold and Antonio Petra.

About Jarred Cinman

Jarred Cinman authored this article as an ACA board member and a member of the ACA Future Industry committee, which also includes Vincent Maher, Musa Kalenga, Haydn Townsend, Matthew Arnold and Antonio Petra.