Innovation Policy Colloquium

 

Professors Helen Nissenbaum and Katherine Strandburg
Spring 2017

Wednesdays, 2:10-4:00 pm, 245 Sullivan Street, Furman Hall, Room 210
Thursdays, 5:00-6:50 pm, 40 Washington Square South, Vanderbilt Hall, Room 208

LAW-LW.10930.001
3 credits

The Innovation Policy Colloquium focuses each year on different aspects of the law’s role in promoting creativity and invention.

This year, we will examine the legal and policy challenges raised by society’s increasing reliance on so-called “big data” in a broad range of public and private endeavors, such as targeted advertising, assessment of creditworthiness, urban management, healthcare, law enforcement, and counterterrorism.

The Colloquium has two components. Class readings will explore issues such as privacy, equity, reliability, innovation and transparency from a variety of perspectives -- societal, legal, ethical, political, and humanistic. Open Colloquium sessions will feature works-in-progress presented by an interdisciplinary slate of leading scholars. Interested faculty, researchers and alumni will be invited to attend these sessions. Readings and discussion prior to each presentation will prepare students to participate fully in these sessions. Each Colloquium student is required to write an independent research paper on some aspect of the "big data" issue.

Spring 2017 Schedule of Presenters

Wednesday, JANUARY 25
Foster Provost, Professor of Data Science and Information Systems, Andre Meyer Faculty Fellow, Information, Operations & Management Sciences Department, Stern School of Business
Guest Lecture: An Introduction to Data Science

Thursday, JANUARY 26
Cathy O’Neil, Data Scientist and Author
Guest Lecture: Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy

Wednesday, FEBRUARY 1
Duncan Watts, Principal Researcher, Microsoft Research, Founding Member, MSR-NYC lab
Guest Lecture: Computational Social Science: Exciting Progress and Future Challenges

Thursday, FEBRUARY 16
Julia Angwin, Author, Dragnet Nation, Journalist, ProPublica
Algorithmic Accountability
Abstract: Machines are making a lot of decisions that used to be made by humans. Machines now help us make individual decisions, such as which news we read and the ads we see. They also make societal decisions, such as which neighborhoods get a heavier police presence and which receive more attention from political candidates. Journalist Julia Angwin talks about the challenges of holding machines accountable for their decisions.
Reading: ProPublica Articles

Wednesday, MARCH 1 [Rescheduled from February 9]
Solon Barocas, Postdoctoral Researcher, Microsoft Research
Taking Explanation Seriously in Law and Machine Learning
Abstract: From scholars seeking to “unlock the black box” to regulations requiring “meaningful information about the logic” of automated decisions, recent discussions of machine learning—and algorithms more generally—have turned toward a call for explanation. Champions of explanation charge that algorithms must reveal their basis for decision-making and account for their determinations. But by focusing on explanation as an end in itself, rather than a means to a particular end, critics risk demanding the wrong thing. What one wants to understand when dealing with algorithms—and why one would want to understand it—can vary widely. Often, the goals that motivate calls for explanation could be better served by other means. Worse, ensuring that algorithms can be explained in ways that are understandable to humans may come at the cost of another value that scholars, regulators, and critics hold dear: accuracy. Reducing the complexity of a model, for example, may render it more readily intelligible, but also hamper its performance. For explanation to serve its intended purpose and to find its appropriate place among a number of competing values, its champions need to consider what they hope it will achieve and what explanations actually offer.

Thursday, MARCH 2
Martha Poon, Research Affiliate, The Committee on Global Thought, Columbia University
Borrowing to Pivot – What can Microsoft tell us about the financing of the cloud?
Abstract: Microsoft Corporation is a global giant, but even giants have to keep pace. After making its fortune in a humbler era of shrink-wrapped software, and maturing during the browser wars, the company will now pursue a data-driven business model. In June 2016, as part of a strategy to merge enterprise software with digital automation and machine learning, it announced that it would buy out professional social media site LinkedIn in a $26bn all-cash transaction. Microsoft is borrowing to pivot. To finance the acquisition it sold $19.7bn in investment-grade bonds, backed by a corporate cash reserve of over $105bn. This paper will show that the tech sector is building up its ambitious cloud-computing infrastructure in a unique financial environment, in which a handful of tech corporations command an unprecedented hoard of cash as well as access to abundant, low-interest debt financing. The research unearths a kind of Faustian pact between the global money markets’ insatiable need to produce tradable credit assets, a global multinational’s need to sustain shareholder value, and what some are calling ‘surveillance capitalism’. The paper argues that Silicon Valley’s potent new business model is a silent beneficiary of the post-2008 credit contraction. What if the cloud is a result of dynamics in financial markets, and not just of technical innovation? How would this challenge our approach to policy and regulation?

Thursday, MARCH 9
Mireille Hildebrandt, Professor of Law, Vrije Universiteit Brussel, Law Science Technology & Society (LSTS); Professor of Smart Environments, Data Protection and the Rule of Law at the Science Faculty, Radboud University, Nijmegen, The Netherlands
From Law as Information to Law as Computation
Abstract: The idea of computational law or jurimetrics stems from a previous wave of artificial intelligence. It was based on an algorithmic understanding of law, celebrating logic as the sole ingredient for proper legal argumentation. However, as Holmes noted, the life of the law is experience rather than merely logic. Machine learning, which determines the current wave of artificial intelligence, is built on machine experience. The resulting computational law may be far more successful in terms of predicting the content of positive law. In this lecture we will discuss the assumptions of law and the rule of law and confront them with those of computational systems. This should inform the extent to which artificial legal intelligence provides for responsible innovation in legal decision making.

Thursday, MARCH 23 [Joint session with IILJ Colloquium]
The IILJ Colloquium “The International Law of Google,” in cooperation with the Innovation Policy Colloquium, will host Vijaya Gadde ’00, General Counsel & Director of Communications, Twitter, Inc. and member of the Board of Trustees of NYU Law, for a conversation about social media, public discourse, and international law. Please RSVP; space is limited.

Thursday, MARCH 30 [CANCELLED]
Barbara Evans, Professor of Law, George Butler Research Professor, Director, Center on Biotechnology & Law, University of Houston Law Center
Consumer-driven Health Information Commons
Abstract: This article introduces consumer-driven health information commons, which are institutional arrangements to empower groups of consenting individuals to collaborate to assemble powerful, large-scale health data resources for use in scientific research on terms the group members themselves would set. Twentieth-century research and privacy regulations enshrined data-holders (hospitals, insurers, and other entities that store people’s data) as the prime movers in assembling large-scale health data resources for scientific use--an approach that, this paper argues, is in the process of breaking down and may be unable to meet the needs of twenty-first-century data science.  This article draws on the natural resource commons theory associated with Elinor Ostrom to propose an alternative approach that would place consumers—patients, research subjects, and persons who track their health using mobile and wearable sensor devices—at the center of efforts to assemble large-scale data resources.

Thursday, APRIL 6
Matthew Connelly, Professor, Department of History, Columbia University
Can We Use Artificial Intelligence and Big Data to Identify State Secrets?
Abstract: Whether officials can be trusted to protect national security information has become a matter of great public controversy, reigniting debate about the scope and nature of official secrecy. The declassification of millions of electronic records has made it possible to analyze these issues with rigor and precision. Using machine-learning methods, we can mine State Department cables to identify what kind of information is most likely to be classified, and what information is particularly likely to be "lost." This research confirms that there are longstanding problems of both overclassification and underclassification, a finding that puts recent controversies in a new light. The most egregious scandal about secrecy is not whether this or that official mismanaged classified information, but that the government spends $15 billion a year protecting state secrets without knowing whether officials actually agree on what should be classified.

Thursday, APRIL 13
Elizabeth Joh, Professor of Law, U.C. Davis School of Law
The Undue Influence of Surveillance Technology Companies on Policing
Abstract: Conventional wisdom assumes that the police are in control of their investigative tools. But with surveillance technologies, this is not always the case. Increasingly, police departments are consumers of surveillance technologies that are created, sold, and controlled by private companies. These surveillance technology companies exercise an undue influence over the police today in ways that aren’t widely acknowledged, but that have enormous consequences for civil liberties and police oversight. Three seemingly unrelated examples--stingray cellphone surveillance, body cameras, and big data software--demonstrate varieties of this undue influence. These companies act out of private self-interest, but their decisions have considerable public impact. The harms of this private influence include the distortion of Fourth Amendment law, the undermining of accountability by design, and the erosion of transparency norms. This Essay demonstrates the increasing degree to which surveillance technology vendors can guide, shape, and limit policing in ways that are not widely recognized. Any vision of increased police accountability today cannot be complete without consideration of the role surveillance technology companies play.


Pending approval, 2 New York CLE credits in the Area of Professional Practice will be given to both experienced and newly admitted attorneys (those admitted to the New York Bar for less than two years). The Colloquium is presented in the traditional (in-person) format.

Questions about the Colloquium should be addressed to Nicole Arzt at nicole.arzt@nyu.edu or (212) 998-6013.