The U.S. landscape for AI adoption presents both challenges and opportunities as organizations work to implement responsible practices. In this session, panelists will discuss the current regulatory environment alongside practical approaches to responsible AI deployment.
Responsible AI isn’t a one-time effort; it requires end-to-end oversight across the entire AI lifecycle. This session explores how enterprises can manage AI systems responsibly from conception through decommissioning, ensuring ethical, compliant, and effective outcomes at every stage.
- Establishing governance frameworks for each AI lifecycle phase, ensuring accountability from design to decommissioning.
- Integrating continuous monitoring and updates to mitigate risks and address evolving regulatory requirements.
- Promoting transparency and ethical practices by embedding responsibility in every AI development and deployment step.
Implementing responsible AI governance is vital to balancing compliance with innovation in a rapidly evolving landscape. In this session, Amber and Gary will explore practical strategies for rolling out AI governance frameworks, including navigating global regulations and leveraging AI to enhance governance itself.
- Applying the NIST AI Risk Management Framework and aligning with the EU AI Act for compliance.
- Building a scalable, global AI governance program to address privacy, accountability, and customer concerns effectively.
- Using AI tools to streamline governance processes, driving efficiency and maintaining an innovative edge.
As AI rapidly evolves from predictive, task-specific models to goal-driven agentic systems that can act autonomously, make decisions, and even collaborate with other agents, the stakes for responsible governance have never been higher.
The panel will discuss the governance implications of this shift.
Identifying and analyzing the right use cases is key to harnessing AI's transformative potential responsibly. This session dives into practical approaches for auditing use cases; assessing risks such as bias and hallucinations; and integrating AI into workflows to achieve efficiencies, navigate challenges, and deliver measurable business value.
- Evaluating workflows to identify high-impact use cases aligned with organizational goals and ethical considerations.
- Prioritizing AI initiatives based on feasibility, ROI, and alignment with responsible innovation principles.
- Establishing feedback loops to refine AI integration and adapt to evolving organizational needs.
The evolving regulatory environment presents both challenges and opportunities for businesses navigating Responsible AI. In this talk, Zachary will explore how companies can not only keep pace with regulations like the EU AI Act but also transform compliance into a competitive advantage, all while maintaining business viability.
Establishing and embedding a centralized approach to Responsible AI governance is key to driving consistency, accountability, and scalability in AI-driven enterprises. This session explores practical strategies to align governance frameworks with business goals, mitigate risks, and ensure sustainable AI innovation.
- Designing a centralized governance structure to unify Responsible AI policies, processes, and oversight across departments.
- Implementing clear accountability and reporting mechanisms to ensure compliance and build trust across stakeholders.
- Fostering a culture of continuous learning and ethical AI innovation through training, audits, and stakeholder engagement.
Large Language Models (LLMs) have the potential to revolutionize the insurance industry, offering new capabilities in customer service, risk assessment, and claims processing. However, they also present significant risks, such as hallucination, misinformation, perpetuation of biases, and privacy concerns. To address these challenges, GenAI solutions need built-in self-validation mechanisms and must undergo a thorough validation process before deployment, serving as both a best practice to ensure quality of service and a compliance measure to meet regulatory requirements.
In this presentation, Eugene, Manulife's Vice President and Global Chief Data Scientist, will explore challenges in building a reliable GenAI solution and innovative approaches in ensuring responsible AI in insurance applications, while Shone, Director of AI Validation and Governance, will share Manulife’s model risk management (MRM) practices for validating enterprise-wide generative AI solutions. Together, they will highlight the industry's commitment to developing AI systems that uphold transparency, accountability, and sustainability, using a multidimensional validation framework that addresses model risk, data privacy, cybersecurity, and ethical considerations specific to insurance operations.
Artificial Intelligence (AI) technologies have the potential to transform industries and businesses, but they also pose complex regulatory and ethical challenges. At Vanguard, we are committed to integrating responsibility at every stage of AI adoption and scaling. In this session, we will share our journey of building methodologies that enable responsible AI across the enterprise, from development through implementation. We will discuss how a commitment to fairness, accountability, and transparency drives value while navigating these challenges.
In this session, we will cover the following topics:
1. Building scalable AI frameworks that prioritize fairness, transparency, and ethical decision-making in real time.
2. Implementing training and inference monitoring to ensure that AI models are aligned with our values and principles.
3. Addressing the challenges of building and scaling responsible AI across the enterprise, including organizational, cultural, and technical considerations.
4. Sharing best practices and lessons learned from our journey, and discussing the future of responsible AI at Vanguard and beyond.
Join us for an insightful and engaging conversation about the importance of navigating the challenges of scaling responsible AI. We look forward to sharing our experiences and learning from the perspectives and insights of others in the field.
Explore the critical implications of AI on employment and strategies for navigating this transformational shift. This panel discusses proactive approaches for businesses to openly discuss, understand, and mitigate the impact on jobs while embracing AI innovation responsibly.
- Investing in reskilling and upskilling programs to empower employees for future roles in AI-driven environments.
- Fostering open communication channels to address concerns and ensure transparency about AI integration plans.
- Implementing inclusive AI strategies that prioritize human well-being and job retention alongside technological advancement.