Editor's note: This expert Q&A is part of our “AI is everywhere ... now what?” special project exploring the potential (and potential pitfalls) of artificial intelligence in our lives. Explore more topics and takes on the project page.
When it comes to rapid developments in technology, how can lawmakers plan to set ethical parameters before they become outdated?
For example, what do legal policies on AI look like when developed, and at which level of the legal system should they be addressed?
Gary Marchant, Regents and Foundation Professor of Law and faculty director of the Center for Law, Science and Innovation, is one of the leading experts working to answer these questions.
In 1984, the Sandra Day O’Connor College of Law at ASU became the first law school in the U.S. to create a center, now the Center for Law, Science and Innovation, focused on the intersection of science and law. More than three decades later, the center remains a leader in addressing the governance of emerging technology.
Marchant has also authored more than 150 chapters on legal issues related to rapidly advancing technology and is an elected lifetime member of the American Law Institute and a fellow of the American Association for the Advancement of Science.
Here, he provides his perspective on AI and how lawyers can move forward with the rapidly changing technological landscape.
Question: When you began your career in law, did you see the possibility of AI growing to the extent of where it is today?
Answer: Not at all. AI was just something in science fiction. In fact, about 10 years ago, I had a law student who wanted to do an applied research project in which he would develop the syllabus and materials for an AI law course. I reluctantly agreed to supervise this, even though I thought it was a complete waste of time. To my surprise, just five years later, I went looking for — and found — that syllabus and binder of materials, because I was going to teach an AI law course. In just five years, the idea of teaching AI to law students went from a silly waste of time to a present reality.
Q: You’ve spoken about “soft law” with AI in your work. Could you expand on what this means?
A: Soft law encompasses any measure that sets forth substantive expectations but is not directly enforceable by government. This includes things like private standards, codes of conduct, best practices, voluntary agreements, principles and ethical codes.
Q: How do you personally approach writing legal policy for AI and other fast-developing technology?
A: The key for me is the pacing problem, which is that technology will always move faster than any legal text. So the policy or legal prescription must be written to accommodate technology changes and be easily modified.
Q: What challenges do current or soon-to-be-lawyers face while these legislative policies are still being developed, and how can they best prepare themselves?
A: There is much uncertainty about the applicable legal requirements for AI, which creates challenges for lawyers to advise clients on relevant requirements. Congress is institutionally incapable of providing an effective legal framework, nor do we want them to try, because any statutes would be obsolete by the time the ink dries. It will therefore be the courts that decide legal issues for AI, but this process will take several years to resolve, so we must all live with uncertainty in the meantime.
Q: Can you discuss a recent study you and your colleagues have done looking at soft law mechanisms already in place?
A: We are currently doing a project looking at how soft law and hard law regulate a technology together, using autonomous vehicles as a case study. While people sometimes argue for hard law or soft law alone, in reality any technology will be governed by some of each. A key question, therefore, is how soft law and hard law work together, and we are examining several different models of how these two forms of governance might interact.
Q: Do you utilize AI in your classes, and if so, how?
A: Yes. In addition to discussing how artificial intelligence creates novel legal issues, I encourage my students to use AI to write their papers. I give them advice on how to use AI in a nuanced way as a personal assistant or co-author, rather than asking the AI to write the entire paper. The students who use AI properly produce very good papers — in fact, I now get very few poor student papers.
Q: What’s one thing you would like readers to take away regarding AI and legal policy?
A: The practice of law, like most professions, is being substantially affected — you might even say disrupted — by AI. But law is unique in that how it resolves AI issues will affect every other industry and use of AI, because it will set the ground rules by which AI must operate.