Robots and Robes: Automation and the Future of Legal Work

Jack Solowey
Staff Writer

Law students are accustomed to job competition. With the advent of technology like JP Morgan’s Contract Intelligence software, said to perform in seconds loan-agreement interpretation that once consumed 360,000 lawyer hours, and to do so with fewer errors than humans, we can now safely add robots to the ranks of our competitors. While existing technology threatens to automate nearly half of the labor market, according to a recent McKinsey & Co. paper, the high-level judgment at the heart of legal practice should protect the profession from complete automation.

If we sort jobs along two axes, routine vs. non-routine and cognitive vs. manual, routine jobs of both kinds are currently the most vulnerable to automation. Daron Acemoglu of M.I.T. and Pascual Restrepo of Boston University found that in manufacturing, where work is routine, manual, and repetitive, “for every robot per thousand workers, up to six workers lost their jobs.” The authors were surprised that openings in other sectors have thus far failed to offset these losses.

Legal practice includes both routine and non-routine cognitive work, and the former is poised for increased automation. A 2016 paper by Dana Remus of UNC Law and Frank S. Levy of M.I.T. estimated that full implementation of existing automation technology would reduce lawyer hours by 13%.

Job reduction in routinized sectors of the law is, however, projected to be offset by increased demand for highly skilled, non-routine legal work requiring creativity, wisdom, and emotional intelligence, such as counseling clients, negotiating deals, and devising arguments.

Humanistic traits like imagination and empathy have been held up by thinkers like M.I.T.’s Erik Brynjolfsson and Andrew McAfee, authors of The Second Machine Age, as capacities that machines won’t easily replicate. Jobs that require them, like counseling and caregiving, are thus said to be refuges for human capital.

More pessimistic observers, however, note that even jobs as EQ-intensive as physical therapy have been performed by an Xbox Kinect motion sensor and a monitor. Combine that technology with facial recognition software already said to detect pain in children, and it is possible to imagine that client-service professions will not be completely safe from automation.

There may yet be a domain, however, with no substitute for human work: value judgments.

Consider the classic ethical thought experiment known as the trolley problem. In this scenario, you are at the switch of a runaway trolley. If you do nothing, the trolley will hit one set of innocents, such as a group of schoolchildren. If you act, your only option is to flip the switch and divert the trolley into a different set of victims, such as a group of senior citizens. Reasonable people disagree about the moral course of action, or inaction, here. Only the internal scales of our ethical preferences can answer such unsettling problems.

Self-driving cars involved in accidents will face real-life versions of the trolley problem. Wherever decision parameters for choosing the optimal mid-accident route are programmed, they will necessarily entail ethical choices: is pedestrian safety weighted more highly than driver safety? Is the sheer number of pedestrians the criterion for choosing a collision course, or will factors like age, to the extent discernible, matter? Will programs opt for or against “playing god,” staying on a higher-casualty collision course because it requires fewer adjustments to the car’s current path?
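To see how such choices would surface in software, consider a deliberately simplified sketch. Every name, weight, and rule below is hypothetical, invented for illustration rather than drawn from any real vendor’s system, but it shows how each constant in a path-selection program would amount to an ethical judgment:

```python
# Hypothetical illustration only: names and weights are invented to show
# where ethical choices hide inside a cost-minimizing path selector.
from dataclasses import dataclass


@dataclass
class CandidatePath:
    """One possible mid-accident trajectory the vehicle could take."""
    pedestrian_casualties: int  # predicted pedestrians harmed
    occupant_casualties: int    # predicted vehicle occupants harmed
    deviation: float            # 0.0 = stay the course, 1.0 = maximal swerve


# Each constant below is an ethical judgment, not an engineering fact.
PEDESTRIAN_WEIGHT = 1.0  # how much a pedestrian casualty "costs"
OCCUPANT_WEIGHT = 1.0    # weighting occupants equally is itself a choice
DEVIATION_PENALTY = 0.5  # penalizing action over inaction encodes a
                         # stance on "playing god"


def expected_cost(path: CandidatePath) -> float:
    """Score a candidate path; the lowest-cost path would be chosen."""
    return (PEDESTRIAN_WEIGHT * path.pedestrian_casualties
            + OCCUPANT_WEIGHT * path.occupant_casualties
            + DEVIATION_PENALTY * path.deviation)


if __name__ == "__main__":
    stay = CandidatePath(pedestrian_casualties=2, occupant_casualties=0,
                         deviation=0.0)
    swerve = CandidatePath(pedestrian_casualties=0, occupant_casualties=1,
                           deviation=1.0)
    chosen = min((stay, swerve), key=expected_cost)
    print("stay cost:", expected_cost(stay))      # 2.0
    print("swerve cost:", expected_cost(swerve))  # 1.5
    print("chosen:", "swerve" if chosen is swerve else "stay")
```

With these weights the car swerves; raise the hypothetical DEVIATION_PENALTY above 1.0 and it stays the course. The “moral” outcome flips with a single number, which is precisely the kind of parameter someone will have to justify.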

A recent panel at venture capital firm Andreessen Horowitz’s a16z Tech Policy Summit projected that making these moral judgments, which must precede any programming of decision parameters, will create a new industry of “ethics as a service.” Consultants could advise engineers on how heavily to weight criteria such as the predicted number of casualties in their self-driving software. While the field could create openings for moral philosophers, as bioethics did, lawyers’ general analytical ability and specific fluency in concepts of duty, negligence, and reasonableness should create roles for them as well.

Even where local, state, and federal governments address the life-or-death consequences that will inevitably flow from programming self-driving vehicles, the core dilemma of the trolley problem will remain disconcerting. To the extent authorities craft vague regulations to sidestep uneasy tradeoffs, lawyers will be needed to fill the gaps and advise programmers on how to comply.

Defining parameters is a task in which lawyers have a comparative advantage. Applying that skillset to value judgments may generate billable hours well into the automated future.
