From Nuclear Weapons to Killer Robots: A Conversation on Science and Responsibility

On Thursday, March 19th, the University of Toronto’s Hart House will host From Nuclear Weapons to Killer Robots: A Conversation on Science and Responsibility. Dr. Jason Millar (CRAiEDL Director) will join Dr. John Polanyi for a conversation about ethical responsibility in designing AI and autonomous technology. This is the final event in a series celebrating Hart House’s 100th Anniversary.

Scientific and technological innovation is often seen as a purely positive force. However, Dr. Millar and Dr. Polanyi will discuss the risks that often accompany innovation and responsible ways to understand and manage those risks. Since these technologies will certainly become central to communities – both large and small – we must also begin to consider how everyone can become a part of the decisions that are propelling our world.

Join Dr. Millar and Dr. Polanyi for a discussion about the future of technology and responsible innovation.

We (the North!) Robot 2020

The Centre for Law, Technology and Society (CLTS) at the University of Ottawa will host We Robot 2020, North America’s premier international conference on law and policy relating to robotics and AI. We Robot 2020 is co-chaired by Dr. Florian Martin-Bariteau, Director of the CLTS; CRAiEDL Director Dr. Jason Millar; and Katie Szilagyi, PhD Candidate in uOttawa’s Common Law program.

We Robot 2020 will feature some of the world’s leading robotics and AI researchers – scholars, policy-makers, regulators, and entrepreneurs – who will discuss the latest developments involving robots and AI in homes, hospitals, public places, and battlespaces. How will law and policy need to change to accommodate robots and AI? How should emerging policy issues surrounding robotics and AI be tackled? We Robot 2020 will attempt to answer these, and other, questions.

Approximately 200 of the world’s leading lawyers, engineers, policy-makers, social scientists, philosophers, roboticists, ethicists, and regulators who specialize in these topics will be in attendance. They will engage in interdisciplinary discussion and debate to advance the legal, ethical, and engineering issues that governments, policy-makers, and industry face in today’s rapidly evolving robotics and AI ecosystem.

Join us in Ottawa on April 2nd – 4th for We Robot 2020! Registration is open!

CRAiEDL Members Rethink the Social Impact of Navigation Apps at the AI Summer Institute

From July 22 to 25, 2019, an international, multidisciplinary group of experts, graduate students, and researchers gathered at the Alberta Machine Intelligence Institute (AMII) in Edmonton for the inaugural AI and Society Summer Institute, co-hosted by AMII, CIFAR, and UCLA School of Law’s AI Pulse Program. The purpose of the Institute was to examine the societal, ethical, and policy implications of AI technologies through a unique experimental and experiential process aimed at producing tangible (and intangible) rapid outputs.

Chaired by Edward Parson from UCLA School of Law, the first day included presentations from leading scholars, experts, and researchers to frame the overall discussion and ensuing activities. Presenters included Dr. Jason Millar, CRAiEDL Director, who discussed “…new modalities for AI-assisted turn-by-turn navigation.” The issue with current navigation apps, he argued, is that they focus on very specific, and often narrow, value optimizations, such as the “fastest” or the “lowest-cost” route. These options do not take into consideration other values held by the broader driving population, whose needs or conditions may not fit within the limited scope of options offered by current apps. He suggested that the values currently embedded in those apps, such as time efficiency, are overly narrow in the way they prescribe the user’s experience and, as a result, impact society. According to Millar, we could embed alternative values that better align with the values of a broader set of drivers, and of society.

Following the presentations, all of the participants were asked to brainstorm research questions to work on for the rest of the Institute – questions that were original, would attract collaborators, and were realistic in scope given the Institute’s timeframe and resources. Ideas were written down on sticky notes and grouped on the wall. Participants whose suggestions proved popular were asked to pitch them in front of the entire group, and votes were then cast to decide whether others would join them to form a smaller workgroup. In the end, eight projects were selected, including Dr. Millar’s, which he pitched on the premise of his presentation.

Dr. Millar’s team – Nick Novelli, Anne Boily, Carlos Ignacio Gutierrez, Courtney Doagoo, Kathryn Bouskill, Elizabeth Wright, Brent Barron, Elizabeth Joh, Thomas Gilbert, Leilani Gilpin, Graham Taylor, Nicolas Rothbacher and Margaret Glover-Campbell – was highly interdisciplinary, with representation from engineering and computer science, anthropology, policy, and law. Using Design for Human Values, an ethical design framework Dr. Millar co-developed with researchers from the Center for Automotive Research at Stanford (CARS), the group interrogated the values that developers (both technical and non-technical) embed in the design of navigation technology. For example, while navigation apps may offer preferences such as the “fastest” or “cheapest” route, with filters that “avoid highways” or “avoid tolls,” there are other values that could be included in the development of these apps. The workshop team decided to experiment with embedding an alternative value into turn-by-turn navigation, namely “cognitive load,” in order to demonstrate how different values can be used to reimagine both the driving experience and the impact turn-by-turn navigation can have on society.

For the team, designing around cognitive load meant giving the driver options via the user interface that would help reduce their cognitive load while driving – in other words, reducing the mental effort required for the driver to get to their destination. The team first broke into three sub-groups to identify all of the stakeholders that would have an interest in navigation and mobility. The sub-groups identified a range of stakeholders, such as children, schools, individuals taking their dogs for a walk, city planners, and parks and trees (yes, non-humans were considered stakeholders, too!). The next step was to determine how to embed values in the prototype mock-up – i.e., how do we translate these values into actionable filters that can be used to lessen the cognitive load while driving?

The process then shifted to unpacking the meaning of cognitive load and agreeing on which drivers most need to reduce it. The team used role-play scenarios to help them empathize with those drivers, for example, imagining what a tired driver would want to avoid while driving home. In the end, the filters incorporated into the mock-up included those that would avoid highways, left-hand turns, winding roads, crosswalks, festivals, road closures, and animal crossings. The group then created a mock name and logo – MOB.LY (short for mobility) – and drafted a mock press release to present the concept to the rest of the group.
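
To make the idea of value-laden filters concrete, here is a minimal, hypothetical sketch of how filters like these could be translated into a routing cost function – a toy illustration only, not the team’s actual mock-up (which was a design prototype rather than working code). Each road segment accrues a penalty for every feature the driver has asked to avoid, and an ordinary shortest-path search then favours lower-stress routes. The map, features, and penalty weights below are all illustrative assumptions.

```python
import heapq

# Hypothetical road network; in a real app this would come from map data.
# Each edge: (destination, travel_minutes, segment_features)
GRAPH = {
    "home":   [("A", 5, {"left_turn"}), ("B", 8, set())],
    "A":      [("office", 4, {"highway"})],
    "B":      [("office", 6, {"winding", "crosswalk"})],
    "office": [],
}

# Illustrative cognitive-load penalties (in minutes-equivalent) for features
# a tired driver might want to avoid. These weights are assumptions, not MOB.LY's.
PENALTIES = {"highway": 5.0, "left_turn": 4.0, "winding": 2.5, "crosswalk": 1.0}

def edge_cost(minutes, features, active_filters):
    """Travel time plus a penalty for each filtered feature on the segment."""
    return minutes + sum(PENALTIES[f] for f in features if f in active_filters)

def lowest_stress_route(graph, start, goal, active_filters):
    """Dijkstra's shortest path, using the value-laden cost instead of raw time."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, features in graph[node]:
            if nxt not in seen:
                step = edge_cost(minutes, features, active_filters)
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return None

# "Fastest" routing vs. routing that also avoids left turns and highways.
print(lowest_stress_route(GRAPH, "home", "office", set()))
print(lowest_stress_route(GRAPH, "home", "office", {"left_turn", "highway"}))
```

In this toy example, switching on the “avoid left turns” and “avoid highways” filters shifts the recommendation from the fastest route to a calmer alternative – exactly the kind of trade-off between time efficiency and cognitive load that the team wanted the interface to surface.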

The rapid outputs from the other Institute groups were extremely diverse and highly innovative. Team MOB.LY’s take-aways from this incredible experience include, among other things, the value of the Design for Human Values process for aiding in the responsible engineering of complex AI systems. The process allowed the team to demonstrate that there will always be different and competing values that developers must confront when creating technology. The design process quickly led to questions about privacy, tracking, and liability that the team could examine for their impacts on a broad set of stakeholders. For example, one iteration of the mock-up included an option for the user to indicate their level of exhaustion. The team quickly pivoted away from this feature because of liability concerns for both the app and the user (how would the law treat an admittedly “tired” driver who got into an accident?). Lastly, when highly motivated, multidisciplinary experts work in a collaborative spirit toward a common goal, where ideas are fostered and shared, truly ambitious and creative outcomes are possible.

* Three members of CRAiEDL – Jason Millar (Director), Sophie Le Page (PhD Candidate), and Dr. Courtney Doagoo (Fellow) – were present at the 2019 Summer Institute.