Spearheading AI Trustworthiness, Compliance, Governance, Ethics, Sustainability & Risk Management
The Practical & Strategic Home for Enterprise Leaders to Ensure Commercial, Technical & Competitive Success

16-18 September 2024 | Hotel Olympia, London, UK

Would you agree that the risks of AI are posing more challenges today than ever before? Why should the world be paying attention?

Certainly. As I recently explained in a blog published for the World Economic Forum’s AI Governance Alliance #ResponsibleGenerativeAI kick-off on September 19th, Generative AI doubles or triples the pace of automation that AI initially triggered. While AI has a massive positive impact in many sectors, it also poses greater risks and challenges.

This has two important implications:

  • We need to accelerate talent transformation, hiring, upskilling and reskilling efforts in line with the pace of technology.
  • We need to enhance our capabilities to maximize the opportunities and minimize the risks. These capabilities lie in operationalizing #ResponsibleAI, #ResponsibleData, #SustainableAI, #InclusiveAI, and #Cybersecurity.

How has Generative AI changed the requirements for Responsible AI?

It has accelerated them considerably. Given that most organizations are not yet ready with #ResponsibleAI, we need to start, or progress further on, our respective journeys and collaborate effectively as soon as possible. The changes and additional requirements introduced by Generative AI will need to be handled on a case-by-case basis, depending on the industry and company.

What do you see are the primary risks?

In addition to the typical risks with traceability, explainability, and so on, I see the biggest risk in our education systems and the talent transformation that needs to happen urgently. Most education systems in the world simply ‘rinse and repeat’ what has been taught over past centuries or decades. We are not looking forward to 2030 or 2040 and then reverse-engineering the curricula for the needs of today’s students, so that they can deal with the tectonic change coming our way in jobs and automation. Why does this matter? Students will need to find jobs, and currently there is a massive mismatch between students’ skills and employers’ needs. There is a similar challenge (and opportunity!) in upskilling and reskilling employees and adult learners. The expected return on investment (ROI) in AI may be risky for some businesses, especially if they lack the domain expertise and trained personnel.

What are the biggest challenges you are currently facing with improving Responsible AI in your business?

It is again talent. Responsible AI at scale needs to be built on a solid foundation of specialist skills and training, and it requires a mature culture of accountability. Since this cannot be achieved overnight, at Schneider Electric we are consistently upskilling our employees on matters of bias, ethical decision-making, and accountability.

Any predictions for the upcoming legislation, and any advice on how enterprise leaders can best prepare to navigate it?

With regards to advice for leaders, I wrote a Forbes article, ‘Six Steps To Execute Responsible AI In The Enterprise’, based on my work on Responsible AI at Microsoft and Accenture over the last few years. I would suggest the following steps:

  • Accept RAI as an essential business function — similar to accounting or finance.
  • Enable broader acceptance and cultural change across your organization. Executive sponsorship will be key.
  • Form a passionate cross-functional team and build your RAI champion community.
  • Build your RAI operationalization plan.
  • Adopt RAI tools, toolkits, checklists and frameworks rather than reinventing the wheel where possible (a brief illustration follows this list).
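
To make the last point concrete, here is a minimal sketch of what adopting an existing RAI tool can look like in practice, using the open-source Fairlearn library as one example of such a toolkit; the predictions and group labels below are entirely hypothetical and purely illustrative.

    # Illustrative only: auditing toy model predictions for group disparities
    # with the open-source Fairlearn toolkit. All data below is made up.
    from fairlearn.metrics import MetricFrame, selection_rate
    from sklearn.metrics import accuracy_score

    # Hypothetical ground truth, model predictions, and a sensitive attribute.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
    group  = ["A", "A", "A", "B", "B", "B", "B", "A"]

    # MetricFrame computes each metric overall and broken down by group.
    audit = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=group,
    )

    print(audit.overall)       # aggregate metrics
    print(audit.by_group)      # per-group metrics
    print(audit.difference())  # largest between-group gap per metric

Comparable checks ship with other off-the-shelf RAI toolkits as well, which is usually faster and more reliable than building bespoke bias audits from scratch.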

In terms of legislation, my view is that we should do our best to support and accelerate these efforts around the world, but not ‘outsource’ our responsibility. We already use AI and GenAI in our personal and professional lives, so the responsibility lies with every individual and team. If we see something wrong, or suspect something could go wrong, we need to speak up and take action rather than wait for law enforcement.

Why is this an important forum to you & your work?

As a fellow of the WEF AI Governance Alliance, I believe we need to partner and accelerate our efforts on Responsible AI and Responsible Generative AI across the board: public sector, private sector, and academia. We need to share best practices, tools, frameworks, and training, and benefit from all the resulting synergies. Thank you for bringing us together.
