Open-source models have become foundational to enterprise AI, shifting legal risk from a one-time review to an ongoing operational challenge. At LinkedIn, Responsible AI now spans model documentation, traceability, and compliance with expanding requirements such as the EU AI Act and emerging U.S. state laws. In this session, Franklin Graves, Senior Counsel, Product and Data (AI) at LinkedIn, shares what has changed since last year, including how legal teams are operationalizing large-scale model reviews, how Responsible AI roles are evolving across the organization, and how agentic AI is beginning to help teams keep pace with growing complexity.
• Mapping open-source legal risk, including licensing, provenance, and downstream liability.
• Scaling model review through standardized documentation and audit-ready workflows.
• Using internal AI assistants to accelerate policy interpretation and Responsible AI program execution.