20 Essential Questions for AI Risk and Procurement Assessments

As we gear up for the Responsible AI Summit, Oliver Patel, Enterprise AI Governance Lead at AstraZeneca and speaker at #ResponsibleAISummit, has put together two must-have cheat sheets:

  • 10 Key Questions for AI Risk Assessments
  • 10 Key Questions for AI Procurement Assessments

These resources are designed to guide you in making informed, ethical decisions in your AI journey. Don’t miss out—download them today and ensure your AI strategies are aligned with industry best practices.

[Report] Driving Responsible AI in the Enterprise Sector

In this report, we will explore what responsible AI looks like today, why it’s so important, and the challenges it presents for enterprises.

A Step-by-Step Guide: How to Mitigate the Risks of Using Generative AI

Generative AI is already demonstrating huge potential to drive growth and increase engagement with customers. Early applications, such as creating hard-hitting content on the fly, hyper-personalisation, and streamlining complex tasks, have caught the imagination of business leaders, who are rushing to understand how they can best leverage the technology and reap its rewards. But with great power comes great responsibility. While Generative AI is shaping up to be the next big-ticket driver of productivity and creativity, it comes with several risks that need to be managed to protect businesses and their customers from harm.

Common Generative AI risks include:

  • Inaccuracy: Generating misinformation or inaccurate content
  • Data bias: Generating harmful outputs that are biased or discriminatory
  • Cyber security: Generative AI models could accidentally access sensitive customer and business data
  • Lack of control: Difficulty identifying issues in outputs, such as the use of copyrighted material
  • Insufficient user training: Employees may accidentally expose sensitive enterprise data when using Generative AI

In this guide, we will take you through a step-by-step approach to mitigating the risks of using Generative AI for your business and explain what measures you can put in place to ensure its safe and successful use.

[Insider Guide] Funding & Getting Responsible AI Buy-In

Struggling to Secure Responsible AI Buy-In? 

Crafting a compelling business case for AI that emphasizes responsibility and ethics is crucial for organizations aiming to harness the transformative potential of these technologies. As of the close of 2023, the global AI market reached $196.63 billion, marking a substantial $60 billion increase since 2022, driven significantly by expanding practical applications. Given this substantial investment, embedding responsibility into business strategies becomes paramount.

This guide delves into:

  • Emphasizing ethical and compliant AI usage
  • Highlighting ethical use cases
  • Formulating business cases centered on responsibility

Featuring insights from AI experts at Adecco, BBC, and UBS, it provides essential guidance for navigating the ethical landscape of AI deployment effectively.

Get your complimentary copy now, and learn how to secure funding and buy-in for Responsible AI Implementation >>>

Establishing & Maximising an AI Ethics Council within your organisation

In this article, we will explore what a good AI ethics council looks like, and how you can implement one in your organisation to help guide your AI journey.

A Look Back: The Responsible AI Summit 2024

Join us in reflecting on the impactful discussions and insights shared at the Responsible AI Summit in London, UK, in 2024. This summit convened a diverse group of experts to address challenges in operationalising and scaling AI, assessing use cases, complying with regulation, and driving responsible AI transformation.

For information on who attended, which companies were represented and our speaker line-up, download our look back now >>

[E-Book] Through the enterprise lens: Dissecting the current regulation of generative AI

Although we are still in the early stages of generative artificial intelligence (AI), its potential to drive growth and improve customer engagement is apparent.

Whether that is through creating compelling content in real time, simplifying complex tasks or hyper-personalisation, the technology has caught the attention of business leaders worldwide, with many eager to learn how they can harness its power, and with good reason. Research has suggested that a whopping 40% of all working hours could be impacted by generative AI.

But as generative AI technology becomes more widely available, governments and lawmakers are taking a more proactive role in its governance, with the aim of minimising risk and ensuring the safe usage of the technology. Because of this, it's crucial to stay up to date on the legal landscape to help you avoid risk and adhere to guidelines.

In this E-Book, we will explore how different countries are approaching their regulation of the technology, while providing you with key steps to help stay informed and in control of your generative AI journey.

Join the only event dedicated to operationalising responsible AI in the enterprise: Responsible AI Summit 2024

The current regulatory landscape of generative AI

The usage of generative AI has raised several concerns that have prompted lawmakers to take action and start discussions around implementing regulations and guidelines for safe usage. The most common concerns around generative AI are:

  • Data security
  • Privacy
  • Copyright and fairness
  • Misinformation
  • Transparency

With these concerns in mind, what regulations are countries planning to implement, and what has already been put into action?

European Union

At a glance: The EU is expected to finalise the landmark AI Act, which will introduce the world's first AI regulations. This far-reaching legislation aims to classify AI by levels of risk, while introducing strict penalties for breaking the law.

The European Union (EU) has been actively working on the AI Act for several years, making it by far the most advanced in terms of implementing AI regulations. The Act is expected to be finalised by the end of 2023, which will likely be followed by a multi-year transition period before the laws are formally put into action.

What is the AI Act?

The AI Act is an upcoming set of regulations that aims to categorise AI according to different levels of risk. Its primary goal is to enforce stricter monitoring requirements on high-risk applications of AI and to outright ban AI technologies that pose an unacceptable level of risk.

Some of the unacceptable uses that the EU has identified include:

  • Cognitive behavioral manipulation of people or specific vulnerable groups
  • Social scoring and classifying people based on behavior
  • Real-time and remote biometric identification systems, such as facial recognition

Fortunately, generative AI doesn’t fall into these categories. In fact, the first draft of the AI Act, published in 2021, did not specifically reference generative AI at all. However, this has since changed given the meteoric rise of large language model technologies throughout 2022 and 2023.

Amendments were proposed to the AI Act in June 2023 to give generative AI its own category, “General Purpose AI systems”. This way, the technology wouldn't be constrained by the “high-risk” and “low-risk” categorisations that the AI Act applies to other forms of AI technology. This categorisation recognises that generative AI can be applied to a wide range of tasks with numerous outcomes, and may produce unintended outputs. This is in stark contrast to an AI technology such as facial recognition, which has a more clearly defined use case.

What kind of regulations can we expect to see that will impact generative AI technology?

The AI Act aims to introduce the following requirements for generative AI usage:

  • Generative AI must clearly show that content was generated by AI.
  • Generative AI models must be prevented from generating illegal content.
  • Generative AI providers must publish summaries of the copyrighted data used as inputs.

It's important to keep in mind that the AI Act is subject to change, but here are a few observations in relation to the current draft:

  • Not high risk: Generative AI is not being classified as a high-risk use case.
  • Limited clarity on copyright: While the AI Act states that copyrighted input data should be clearly labeled, it does not provide clarity on remuneration for original creators, nor does it mention whether AI-generated outputs can be copyrighted.
  • Limited clarity over transparency and protective safeguards: According to the AI Act amendments, generative AI solutions must be developed by suppliers in a way that prevents their use for illegal purposes, and any AI-generated outputs must be clearly labeled as such. However, the specifics of these requirements have not yet been provided; they will likely be included in the final document.
  • Supplier vs user burden of risk: The AI Act focuses on the measures suppliers can take to minimise misuse. There is no mention at this time of the burden of risk that users hold.

United Kingdom

At a glance: An artificial intelligence whitepaper has been published that advocates for existing regulators to oversee AI using their current resources. The paper takes a “pro-innovation” stance regarding generative AI.

Published in March 2023, the UK government’s “AI regulation: a pro-innovation approach” outlined an agile framework to guide the development and use of AI, underpinned by 5 principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

The whitepaper mainly discusses the usage of artificial intelligence at a broad level, but references generative AI a number of times.

A few points to note: first, at present there is no dedicated AI regulator in the UK, and it seems that this situation will remain unchanged. Instead, the whitepaper states that existing regulators will use the guidelines provided in the whitepaper to monitor and regulate the use and growth of AI in their respective fields.

Second, the 5 guiding principles above will not be subject to specific laws. The whitepaper states that “new rigid and onerous legislative requirements on businesses could hold back AI innovation and reduce our ability to respond quickly and in a proportionate way to future technological advances.” It is clear from this that the UK is taking a more hands-off approach to regulating AI technologies.

However, it is important to note that the whitepaper suggests a statutory duty may be established in the future, subject to an evaluation of how well regulators uphold the guidance principles.

What kind of regulations can we expect to see that will impact generative AI technology?

On generative AI, the whitepaper highlights its benefits to society, such as its potential in the field of medicine and, more broadly, its potential to grow the economy.

In terms of regulation, the whitepaper states that the government is “taking forward” the proposals outlined by the Government Chief Scientific Adviser (GCSA), most notably on the subject of intellectual property law and generative AI.

Also taking a “pro-innovation” stance, the GCSA recommended enabling the mining of data inputs and applying existing copyright and IP law protections to the outputs, helping to simplify the process of using generative AI while providing clear guidelines for users.

The GCSA also suggested that in accordance with international standards, AI-generated content should be labeled with a watermark showing that it was generated by AI.

Both the GCSA's recommendations and the whitepaper underscore the importance of a clear code of conduct for AI usage that does not impact creativity and productivity. The whitepaper states that “[the UK] will ensure we keep the right balance between protecting rights holders and our thriving creative industries, while supporting AI suppliers to access the data they need.”

The consultation period comes to a close in September 2023, when regulators are expected to voice their opinions on the framework, set out how they plan to implement it, and recommend any modifications.

In the same vein, the Competition and Markets Authority (CMA) is currently conducting a review of AI foundation models, such as ChatGPT, with a focus on consumer protection. It's expected that this review will be released by the end of 2023.

USA

At a glance: Progressing towards more comprehensive AI legislation. The latest development saw the voluntary commitment of seven leading AI companies to establish minimum safety, security, and public trust guardrails.

The USA is generally considered to be lagging behind its European counterparts in terms of governing the usage of AI. However, there have been many developments over the past few years signalling the intent of lawmakers to implement guidelines and legislation to promote safe usage of AI.

Of note, these include the “Blueprint for an AI Bill of Rights” and the AI Risk Management Framework. However, similar to the UK, these are guidance documents not upheld by specific laws. The first of these, the “Blueprint for an AI Bill of Rights”, published in October 2022, outlined 5 principles:

  • Safe and Effective Systems
  • Algorithmic Discrimination Protections
  • Data Privacy
  • Notice and Explanation
  • Alternative Options [opting out of generative AI usage]

In May 2023, the AI Risk Management Framework referenced the usage of generative AI, suggesting that previous frameworks and existing laws are unable to “confront the challenging risks related to generative AI.” Based on this, it can be inferred that generative AI will likely become subject to legislation in the future.

What kind of regulations can we expect to see that will impact generative AI technology?

In July 2023, the White House announced that seven companies engaged in the development of generative AI voluntarily committed to managing the risks associated with the technology. The companies are Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. Of the commitments, the ones that stand to have the most effect on generative AI include:

  • Committing to publicly report AI systems' capabilities, limitations, and areas of appropriate and inappropriate use
  • Prioritising research on the societal risks that AI systems can pose, including avoiding harmful bias and discrimination, and protecting privacy
  • Developing robust technical mechanisms to ensure that users know when content is AI-generated, such as a watermarking system (a minimal sketch follows below)
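
To make that last commitment concrete, here is a minimal, hypothetical Python sketch of the disclosure principle behind such mechanisms: attaching a machine-readable “AI-generated” label to model output. The function name and model name are illustrative assumptions, and real watermarking schemes (such as statistical token watermarks) are far more robust than this simple label.

```python
# Illustrative sketch only: wrap generated text with provenance metadata
# so downstream consumers can tell it was AI-generated. This is a simple
# disclosure label, not a tamper-resistant watermark.
import json
from datetime import datetime, timezone

def label_ai_output(text: str, model_name: str) -> dict:
    """Return the generated text alongside an explicit AI-generation disclosure."""
    return {
        "content": text,
        "provenance": {
            "ai_generated": True,                                    # explicit disclosure flag
            "model": model_name,                                     # which system produced it
            "generated_at": datetime.now(timezone.utc).isoformat(),  # UTC timestamp
        },
    }

# Example usage (the model name is hypothetical):
labelled = label_ai_output("Draft product description...", "example-model-v1")
print(json.dumps(labelled, indent=2))
```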

It's also worth noting the precedent-setting Washington, D.C. federal court case of August 2023, which established that artwork created solely by artificial intelligence is not eligible for copyright protection, unlike human-generated art. US District Judge Beryl Howell stated: "We are approaching new frontiers in copyright as artists put AI in their toolbox," which will raise "challenging questions" for copyright law.

In summary, the USA is making significant strides towards ensuring safe usage of AI-related tools and providing guidance for organisations in their development and implementation. While these guidelines are not currently backed by any specific laws, they will likely serve as a foundation for future legislation.

Understanding regulations from an enterprise perspective

Regulations for AI and generative AI are still a few years away, as we have explored in this e-book. The EU's AI Act is the closest to being finalised, but lawmakers are still making amendments and examining developments in the technology closely. Even after the final draft of the AI Act is approved, it will undergo an implementation period, which could take several more years before any laws are put in place (possibly 2025).

In contrast, the USA and UK have opted for a decentralised approach. Both are currently in consultation periods, and their primary goal at this stage is to create industry guidelines for safe generative AI usage. Similarly, it is anticipated that regulations, if they come to fruition, are several years away.

However, as generative AI continues to advance, it's crucial for companies to have a well-defined strategy in place for AI ethics and compliance. While the USA and UK have not outlined clear penalties for breaches of guidelines, the EU AI Act proposes steep non-compliance penalties, with fines that can reach up to €30 million or 6% of global turnover.

No matter where you do business, it is vital to adhere to AI ethics guidelines to avoid potential penalties, and to remain agile so you can adapt to any future regulations.

Practical steps an organisation can take to adhere to generative AI guidelines

As demonstrated by the recent amendments to the EU’s AI Act to account for generative AI, the regulatory landscape seems to be constantly shifting. To better prepare for the future and to adhere to current generative AI guidelines, here are some practical steps you can take.

  • Start addressing compliance concerns: Maintaining regulatory compliance, even if it's still a few years away, requires careful planning and resource allocation. To ensure that you are prepared to meet upcoming requirements, consider the regulatory frameworks that apply to your specific use case for generative AI, as well as any international guidelines.
  • Focus on the common thread for adhering to generative AI guidelines: The common thread between the EU, USA and UK is to ensure generative AI tools:
    • Are able to add watermarks to clearly show an output was created with the help of generative AI
    • Are able to publish all copyrighted material that was used to generate AI output
    • Have measures in place to ensure that the technology can't be used for illegal purposes
  • Ensure strong data governance: To adhere to the above points, it is important to implement robust data governance processes (see the sketch after this list). Organisations must have a clear understanding of the data inputs used in the generative AI process and maintain a deliberate approach to data selection to ensure that confidential or sensitive data is not used. Additionally, this makes it quick to identify which data was utilised to produce an output, so that in case of any legal requirements, businesses can promptly publish this data without having to search through backlogs.
  • Stay up to date with generative AI advancements: Constant developments in generative AI show the importance of businesses being naturally curious about evolutions in the technology. It is crucial to understand the capabilities and limitations of available generative AI solutions, as well as potential new risks they may introduce.
  • Keep an eye on regulatory updates: Be prepared to stay nimble in the rapidly evolving regulatory landscape, whether it's the EU's AI Act, the UK's AI Regulation Whitepaper, or the ongoing discourse in the USA. Keep an eye out for updates and emerging regulatory guidelines to remain informed and adaptable.
  • Implement a generative AI governance team or governance board: To ensure clear guidelines and safe implementation of generative AI across an organisation, it is necessary to establish a dedicated governance team. This team should communicate generative AI best practices to staff, educate them on potential risks, and provide guidance on how to adhere to regulatory guidelines. The team should also keep the organisation updated on any emerging government regulations.

Bringing together all those involved, including legal, IT, human resources, front-line employees and management teams, will help create robust policies for generative AI usage that prioritise security and ethics.

  • Establish your use cases: To ensure that generative AI is utilised effectively, it is essential to determine your specific use case and how it can address your business challenges. Ensure that the technology is used only in specific instances, avoiding scope creep and unintended generative AI usage. Communicating this to everyone involved in implementing generative AI will help ensure it is used for its intended purpose.
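
As a concrete illustration of the data governance point above, here is a minimal, hypothetical Python sketch of a provenance log that records which data sources fed each generative AI output, so they can be exported and published if regulations ever require it. All names (ProvenanceLog, record_generation, the model and source identifiers) are illustrative assumptions, not an established tool or API.

```python
# Illustrative sketch only: a simple provenance log mapping each
# generative AI output to the data sources behind it, exportable
# to CSV if input sources ever need to be reported.
import csv
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ProvenanceLog:
    entries: list = field(default_factory=list)

    def record_generation(self, output_id: str, model: str, sources: list[str]) -> None:
        """Record one generation event and the data sources it drew on."""
        self.entries.append({
            "output_id": output_id,
            "model": model,
            "sources": "; ".join(sources),  # e.g. licensed corpora, internal documents
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def export(self, path: str) -> None:
        """Dump the log to CSV so input data can be published on request."""
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=["output_id", "model", "sources", "timestamp"])
            writer.writeheader()
            writer.writerows(self.entries)

# Example usage (all identifiers are hypothetical):
log = ProvenanceLog()
log.record_generation("out-001", "example-model-v1",
                      ["licensed-news-corpus", "internal-style-guide"])
log.export("provenance_log.csv")
```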

As generative AI becomes more prevalent, it is essential for companies to take accountability for its ethical use. By following the guidelines of current AI frameworks and implementing early safeguards, businesses can ensure the secure and safe use of generative AI while remaining agile to potential future regulations. The creation of an ethical framework, strong labeling and data tracking capabilities, and the establishment of an AI governance team can significantly contribute to this.

More Generative AI resources:

[Previous Speaker Interview] With Paul Dongha, Group Head of Data and AI Ethics at Lloyds Banking Group

Paul Dongha, Group Head of Data and AI Ethics at Lloyds Banking Group, is responsible for all the processes and technologies that generate trustworthy and responsible outcomes for Lloyds' customers. In this interview, he answers crucial questions about responsible AI and risk management today.

Discussion Points:

  • Are the risks of AI posing more challenges today than ever before?
  • Why do we need to be paying attention to mitigating these risks?
  • Any predictions on legislation?
  • What are you putting into place to manage these problems/risks?
  • Why are forums like this necessary, why should people be interested in attending?

Hear from an expert in this field: download this free interview and get involved in the cutting-edge conversations happening at the Responsible AI & Risk Management Summit!

[Video] The Importance of Responsible AI

Implementing Responsible AI is important not only for wider society, but also for fostering trust in AI systems, which is essential for their long-term success.

This panel will address:

  • Fairness – designing and training models on data sets that are as diverse as possible and emphasize fairness
  • Explainability – enabling users to gain insight into how decisions are made and how results are generated
  • Privacy – ensuring data is kept private and secure to protect users’ personal data and privacy
  • Trust – the potential for Generative AI to erode trust and the implications that has for business and society

Watch the video here >>