Like most science fiction fans, I have spent most of my life waiting for two discoveries that will fundamentally change humankind's role in the world.
One is the discovery of alien life, which may not occur for millennia, if ever. The second is the development of artificial intelligence (AI), which might put me out of a job.
The fear of an AI which surpasses our own intelligence and imposes a "judgement day" on humanity permeates our culture and media. It is also a salient enough threat that Stephen Hawking, Elon Musk and dozens of AI experts penned an open letter one year ago this week through the Future of Life Institute, outlining steps for steering the development of machine learning away from a devastating evolutionary course.
Public administration research -- in particular, the study of organizational decision-making -- should be playing a major role.
Today, public organizations -- both hierarchical and networked structures -- distribute public goods and services, with humans sitting at most of the decision nodes. These cops, budget directors, contract managers and case workers function in a rule-structured environment of human interactions. Yet they're still not that great at long-term sustainability, or at bringing information-disadvantaged and marginalized citizens into authentic decision-making roles within government.
Public organizational goals such as efficiency are imbued with normative values. The humans at the decision nodes may be expected, most of the time, to rank efficiency or effectiveness over secondary objectives such as social equity. This doesn't mean bureaucrats don't care about the disadvantaged; in fact, they are generally ethical. But they may also be sloppy and inconsistent in their rank ordering. With widespread distrust in government, AI may be eagerly deployed in the future to reduce perceived bureaucratic incompetence. Human decision-making nodes are likely to become scarce as machine learning advances. If we cannot engage disinterested or disadvantaged citizens in policymaking and implementation today, what will the chore look like when algorithms displace the administrators?
As Herbert Simon famously noted, humans have biases, sympathies and cognitive limits that force them to take mental shortcuts and make boundedly rational decisions. AI will easily surpass its maker in this regard. Public organizations may be rendered far more efficient by removing corruptible and incompetent humans from public service delivery. Automated fleets may snow-plow our streets. Automated systems will record, ticket and debit the bank accounts of humans who still take manual control of their cars and speed. Public assistance programs will be fully automated, including eligibility determinations. Local government officials competing for economic development may turn to machines to improve strategic decision-making in the face of uncertainty. But how will a sentient budget system decide which schools to close in a contracting school district? How will an autonomous law enforcement drone apply ethical standards at a protest?
In a response to Musk's call for open access to machine learning technologies, evolutionary biologist Suzanne Sadedin wrote last month that the competitive nature of organizational systems means such a move would make it more likely that AI will "wipe out" both itself and humanity.
Her argument is straightforward: humans have been pretty good at competing for scarce resources, but machines will be better. This is the basic logic of Garrett Hardin's "tragedy of the commons": agents have an incentive to exploit a limited resource by taking more than their share. But humans have learned, over thousands of years, what happens at the small scale when you over-harvest a natural resource. Public and private organizations today compete globally and have never experienced a similarly scaled "tragedy of the commons," Sadedin writes. They simply have no evolutionary history to draw from, and their first tragedy may be the final one.
There is a middle ground.
Elinor Ostrom's work on social-ecological systems has contributed much to our ability to "govern the commons." Intergenerational ethics, social equity and citizen participation are holy grails in public administration. Their importance should become only more mission-critical as humans are removed from decision-making processes. Our field -- sometimes frowned upon for its "explicitly normative" focus on ethics, equity and practical policy problems -- has produced a tremendous depth of knowledge to offer at the birth of AI.
As terrifying as AI-empowered oil companies and militaries might sound, why is it not possible to teach our machines like we would teach our children -- to cooperate, to trust and reciprocate?
Remember "WarGames?" Last year, researchers developed a learning program which teaches itself to play different Atari games. Other programs are demonstrating the building-blocks of creative thinking -- inductive, deductive and temporal reasoning. They can read books and answer questions about them. "Do Andoids Dream of Electric Sheep?" the sci-fi novelist Philip K. Dick once asked us. MIT and Microsoft researchers last year moved us a step closer to answering with a graphic network which could "dream" meaningful imagery.
On many fronts, 2015 showed us AI is developing mentally much like a child. What we teach these children is still very much up to us.
I am an assistant professor at the O'Neill School of Public and Environmental Affairs at Indiana University Bloomington, where I direct the MGMT Lab.