Speaker: Professor Lilian Edwards, Professor of Law, Innovation & Society, Newcastle Law School
Biography: Lilian Edwards is a leading academic in the field of Internet law. She has taught information technology law, e-commerce law, privacy law and Internet law at undergraduate and postgraduate level since 1996 and has been involved with law and artificial intelligence (AI) since 1985. She is the editor and major author of Law, Policy and the Internet, one of the leading textbooks in the field of Internet law (Hart, 2018). She won the Future of Privacy Forum award in 2019 for best paper ("Slave to the Algorithm", with Michael Veale) and the award for best non-technical paper at FAccT* in 2020, on automated hiring. In 2004 she won the Barbara Wellberry Memorial Prize for work on online privacy, in which she invented the notion of data trusts, a concept which ten years later was proposed in EU legislation. She is a partner in the Horizon Digital Economy Hub at Nottingham, the lead for the Alan Turing Institute on Law and AI, and a fellow of the Institute for the Future of Work. At Newcastle, she is the theme lead for the Regulation of Data in the data NUCore. She currently holds grants from the AHRC and the Leverhulme Trust. Edwards has consulted for, inter alia, the EU Commission, the OECD, and WIPO.
Title: 'Faithful or Traitor? The Right of Explanation in a Generative AI World'
Abstract: The right to an explanation is having another moment. Well after the heyday of 2016-2018, when scholars tussled over whether the GDPR (in either art 22 or arts 13-15) conferred a right to explanation, the CJEU case of Dun & Bradstreet has finally confirmed its existence, and the Platform Work Directive has wholesale revamped art 22 in its Algorithmic Management chapter. Most recently, the EU AI Act added its own Frankenstein-like right to an explanation of AI systems (art 86).
None of these provisions, however, pins down what the essence of the explanation should be, given the many notions that can be invoked here: a faithful description of source code or training data; an account that enables challenge or contestation; a "plausible" description that may be appealing in a behaviouralist sense but might actually be misleading when operationalised, e.g. to generate a medical course of treatment. Agarwal et al argue that the tendency of UI designers, regulators and judges alike to lean towards the plausibility end may be unsuited to large language models, which represent far more of a black box in size and optimisation than conventional machine learning, and which are trained to present encouraging but not always accurate accounts of their workings. Yet this is also the direction of travel taken by the CJEU in Dun & Bradstreet, above. This paper argues that explanations of large model outputs may present novel challenges needing thoughtful legal mandates.
Please note: this is a hybrid event. It will take place in person in the Cambridge Law Faculty. For those who are unable to make it in person, please register to attend via Zoom.