A recent Stanford University report told us not to fear artificial intelligence. What it didn’t tell us was whether we should fear its legal implications.

The people who study artificial intelligence for a living say the technology should not make us fear for our lives. This is the gist of the first in a series of reports from a Stanford University team about the effect AI will have over the next century. “The frightening, futurist portrayals of Artificial Intelligence that dominate films and novels, and shape the popular imagination,” the report reads, “are fictional.”

The report focused on how AI might affect the typical American city in 2030. The most sweeping changes it laid out were on the transportation front, as self-driving cars are expected to become more commonplace. Otherwise, AI will likely tweak, rather than overhaul, our lives with smarter apps, in-home gadgets such as speakers and vacuums, and improved clinical recordkeeping.

Conspicuously absent from a report focused on AI’s short-term impacts, however, is a thorough discussion of the technology’s legal implications. “Historically, artificial intelligence has not acted very directly on the world,” said Ryan Calo, a University of Washington law professor and a member of the Stanford AI project. “It’s more like people use artificial intelligence to determine something or make a guess. As artificial intelligence acts more directly on the world by driving cars or trading stocks or doing surgery, then we would have to strike a different balance because there is more money on the line.”

Because AI has had such a modest impact on humanity, we have yet to see it bring about any significant legal action. Thus, the legal implications of artificial intelligence have, to this point, been almost entirely theoretical.

There are glimpses of what awaits us, though. Calo has devoted much of his research to the legal issues surrounding robots, where questions of liability have already arisen. Plane and car crashes involving autopilot features, and accidents involving factory robots, foreshadow what will happen when AI goes awry. So far, unfortunately for us humans, judges have usually blamed the people involved rather than the software.

Early examples of AI hedge liability by requiring human participation; Tesla, for example, requires a driver’s hands to be on the steering wheel for its Autopilot feature to function. But the promise of AI is that it can take actions and make decisions without a human being’s direct consent.

“What I’ve puzzled about the most is, how are we going to treat actions undertaken by artificial intelligence that nobody really predicted would occur?” Calo said. “Maybe one system operating in isolation was harmless, but multiple systems interacting with one another … behave in ways that are unanticipated.”

As evidenced by the Stanford report, there’s not a tremendous amount of legal discussion happening on this front, which, to Calo, is a significant oversight. Calo argued in a 2014 Brookings Institution white paper that the federal government should establish a robotics commission. Technologies of the past bred agencies to oversee them, he argued: Radio spawned what became the Federal Communications Commission, and the Department of Transportation’s roots lie in railroad regulation.

If AI’s possibilities are ever realized, it could have significant impacts across a wide legal spectrum. There’s plenty of discussion about the job losses tied to AI (truck drivers, the most common profession in the U.S., are at particular risk), but little about what happens when machines make the mistakes humans used to.

Calo brings up instances of AI generating contractual offers, making threats, or committing defamation. “If all of these things were done by a person, they would subject that person to civil or criminal liability,” he said. “But because it was done by an artificial intelligence, liability becomes more challenging to ascribe. You’re lacking intent in criminal [law], or in tort law you’re lacking foreseeability. That, to me, is the big challenge.”

Society also will have to grapple with what it will allow AI to do. A surgeon must undergo years of medical school before she can be allowed into an operating room, and an attorney must pass a bar exam before he can practice law. If artificial intelligence guides robots that can, say, perform routine surgeries or submit plea agreements, would we allow machines to handle these tasks? Would you read this article if it were written by an AI-infused bot, and would you question its ethics as you might mine?

Just as learning another language can help people better understand the grammar and syntax of their native tongue, artificial intelligence’s proliferation might show us what is truly valued in society. We might be dealing only with Tesla cars and smart vacuums at this point, but, Calo argues, that’s precisely the right time to begin discussing the social and legal elements of a science that will surely become muddier.
