Generative AI in Retail: Opportunities, Risks and Regulation in Australia

Authors: Shariqa Mestroni, James Hoy and Jessica Laverty, Bird & Bird

Like any disruptive technology, AI brings both opportunities and challenges for individuals and businesses in the retail industry.

As AI continues to integrate into the retail sector, businesses are leveraging it in various ways including in:

  • Product Development and Innovation: generative AI assists in writing briefs for product collections and in generating mood boards and visual imagery.
  • Supply Chain and Logistics: AI augments real-time demand forecasting and is helping to overhaul stock management and supply chains.
  • Marketing: AI enables hyper-personalization for loyalty programs and contributes to creating marketing campaigns.
  • Customized User Experiences: online shopping assistance is personalized to the individual customer.
  • Descriptive Writing: AI generates product descriptions for customer-facing sites.
  • Trend Forecasting: AI helps predict consumer and product trends.
  • Product and Service Design: AI aids in designing new products and services.
  • Store Operations and Customer Support: AI assists with day-to-day store operations and improves customer support.

However, these opportunities must be balanced against risks such as copyright infringement and data protection concerns.

AI Regulation in Australia:

In Australia, AI (including generative AI) is regulated by existing legislation covering consumer rights, data protection, competition, and copyright. Although there is no specific AI legislation, the Australian Government encourages responsible AI deployment. Recent steps include:

  • Encouraging organizations to follow Australia’s voluntary AI Ethics Framework.
  • Allocating $39.9 million over five years for AI technology development.
  • Establishing a Select Committee on Adopting Artificial Intelligence to explore AI impacts.
  • Publishing an interim response to the consultation on “Supporting Responsible AI” in January 2024.

While Australia’s regulatory environment evolves, responsible AI practices remain a priority.

IP Implications of Generative AI:

Generative AI tools raise questions about traditional IP rights. By way of example, AI-generated inventions may not qualify for patent protection due to the absence of a human inventor. AI-generated works face similar challenges, as copyright typically requires human authorship.

Content creators worry about misuse of their material by generative AI tools. Retail businesses using AI models must consider the legitimacy of training data.

While there are presently no Australian court cases involving IP infringement by AI systems, overseas cases provide some insights into key issues:

  • Lawsuits have arisen over unauthorized reproduction of images, text, and metadata in AI training datasets.
  • AI-generated outputs and derivative works have led to copyright infringement claims. Ensuring that the AI training data is free from unlicensed content is crucial to avoid potential infringement claims.
  • Some AI companies argue “fair use” exceptions, but Australia lacks a broad “fair use” defence. Instead, “fair dealing” applies for research, study, and criticism.
  • Properly attributing the author of the work is essential. AI-generated content should ideally include information about its origin and the sources used in training.
  • In the context of employment, copyright works created by employees are typically owned by the employer in Australia. However, if an AI tool used by an employee incorporates third-party copyrighted material in its training data, the employer could face exposure to infringement claims. Employers should establish controls around employee use of AI tools and carefully review indemnity clauses in AI tool agreements to manage potential IP risks.

Businesses using generative AI need to be diligent about compliance with intellectual property laws, ensure “clean” training data, and develop ways to demonstrate the provenance of generated content. As AI continues to evolve, legal frameworks will need to adapt to address these complex issues. Employers should stay informed and take proactive steps to mitigate risks associated with AI-generated content and copyright infringement.

Data protection

The rapid advancement of generative AI has brought forth new privacy challenges, particularly concerning ChatGPT, and privacy regulators worldwide have taken a keen interest. In March 2023, the Italian data protection authority took significant action by temporarily banning ChatGPT in Italy. They initiated an investigation into OpenAI’s privacy practices, highlighting several concerns:

  • Data Breach: A breach affecting ChatGPT’s user conversations and payment information.
  • Lack of Information: Users and data subjects were not adequately informed about the collection of their personal data.
  • Legal Basis: OpenAI lacked a clear legal basis for mass data collection and processing to train the algorithms.
  • Data Accuracy: Inaccuracies in the personal data processed by the platform.
  • Age Verification: The absence of an age verification mechanism.

Subsequently, the ban was lifted in April 2023, subject to specific conditions set by the Italian authority. OpenAI committed to implementing measures to safeguard individual privacy. However, ChatGPT remains under scrutiny. Privacy regulators across Europe and beyond continue to investigate, with the European Data Protection Board establishing a dedicated task force to address these concerns.

When assessing the use of a generative AI system in the context of the Australian Privacy Principles (APPs), there are several key considerations to keep in mind:

  • Transparency: Check whether any collection, use, and disclosure of personal information by the generative AI system is clearly disclosed in your organization’s privacy policy. Transparency ensures that individuals are aware of how their data is being used.
  • Collection: Evaluate whether the collection of personal information by the generative AI system is reasonably necessary for your organization’s functions or activities. Consider whether sensitive information is collected, and if so, whether individuals have provided consent. Ensure that the collection of personal information is lawful, fair, and directly from the individuals themselves (unless impracticable).
  • Use and Disclosure: Examine whether the generative AI system uses or discloses personal information for secondary purposes. Obtain consent if needed or ensure that individuals would reasonably expect such use or disclosure. Comply with APP 7 if the system involves direct marketing. Address cross-border disclosure of personal information and ensure compliance with the APPs.
  • Integrity: Take reasonable steps to ensure that personal information handled by the generative AI system is accurate, up-to-date, complete, and relevant. Safeguard the security of personal information stored in the system and follow proper procedures for destruction or de-identification.
  • Individual Rights: Establish a mechanism for handling requests by individuals for access to or correction of personal information held in the AI system. Respect individual rights and privacy.

We are yet to see any case law in Australia, or Privacy Commissioner investigations, concerning the application of the APPs to ChatGPT or other generative AI systems. However, decisions by the Privacy Commissioner and the Administrative Appeals Tribunal (AAT) are potentially instructive, as they have considered privacy law issues arising from AI systems more generally, in the context of facial recognition tools that rely on machine learning algorithms:

  • in late 2021, the Privacy Commissioner found that 7-Eleven had breached the privacy of its customers by collecting biometric information through a facial recognition tool; and
  • in May 2023, on appeal from a decision of the Privacy Commissioner, the AAT found that Clearview AI had collected sensitive information about individuals without consent and had not taken reasonable steps to implement practices, procedures and systems to ensure compliance with the APPs.

Moreover, automated decision making is a topic that is being addressed as part of the Privacy Act Review and the Australian Government has agreed to proposals that:

  • privacy policies should set out the types of personal information that will be used in substantially automated decisions which have a legal or similarly significant effect on an individual’s rights (Proposal 19.1);
  • high level indicators of the types of decisions with a legal or similarly significant effect on an individual’s rights should be included in the Privacy Act and this should be supplemented by OAIC guidance (Proposal 19.2); and
  • a right for individuals to request meaningful information about how substantially automated decisions with legal or similarly significant effect are made should be introduced and entities should be required to include information in privacy policies about the use of personal information to make substantially automated decisions with legal or similarly significant effect (Proposal 19.3).

The Australian Government has also agreed ‘in-principle’ to a proposal that there be a requirement to provide individuals with information about targeting, including clear information about the use of algorithms and profiling to recommend content (Proposal 20.9).

Key takeaways

  1. Businesses and employers should protect themselves by establishing robust AI policies for their employees, with clear prohibitions and guardrails. Employees should receive regular training on the use of AI tools in line with that policy.
  2. When adopting AI tools for use in business, carefully review the terms and conditions of the AI tool to be aware of, and mitigate, any legal risks. You may be interested in reading our article about Generative AI and Machine Learning Contracts.
  3. When dealing with suppliers, ensure that contracts contain sufficient AI warranties and indemnity clauses. Before engaging new suppliers, conduct sufficient due diligence around their AI policies.
  4. Ensure good record-keeping: the use of AI should be documented, including the natural persons involved and the AI prompts used.
  5. Businesses should ensure that any generative AI tools adopted comply with the applicable data protection laws in Australia including in relation to transparency, collection, use and disclosure, integrity and individual rights.

Register today for Shariqa, James and Jessica’s session at the Global Sourcing Seminar Series on AI Risks and Opportunities