Shortcuts to Artificial Intelligence — a Tale

Shortcuts to Artificial Intelligence

https://philpapers.org/rec/CRISTA-3

 

The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field’s original goals, and that are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, recent cases of unexpected effects of AI are the consequences of those very choices that enabled the field to succeed, and this is why it will be difficult to solve them. In this chapter we review three of these choices, investigating their connection to some of today’s challenges in AI, including those relating to bias, value alignment, privacy and explainability. We introduce the notion of “ethical debt” to describe the need to undertake expensive rework in the future in order to address ethical problems created by a technical system.

On social machines for algorithmic regulation

Cristianini, N. & Scantamburlo, T. On social machines for algorithmic regulation. AI & Society (2019).

https://doi.org/10.1007/s00146-019-00917-8

http://link.springer.com/article/10.1007/s00146-019-00917-8

 

Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of ‘social machine’ and we connect it to various ongoing trends and ideas, including crowdsourced task-work, social compilers, mechanism design, reputation management systems, and social scoring. After showing how all the building blocks of algorithmic regulation are already in place, we discuss the possible implications for human autonomy and social order. The main contribution of this paper is to identify convergent social and technical trends that are leading towards social regulation by algorithms, and to discuss the possible social, political, and ethical consequences of taking this path.

A.I. and Human Autonomy

Is our autonomy affected by interacting with intelligent machines designed to persuade us?

An Analysis of the Interaction Between Intelligent Software Agents and Human Users
Burr, C., Cristianini, N. & Ladyman, J. Minds & Machines (2018) 28: 735.
https://link.springer.com/artic…/10.1007%2Fs11023-018-9479-0

Can Machines Read our Minds?

In different areas of science, researchers have discovered that our behaviour can be used by intelligent algorithms to infer psychometric information about us, including emotions, attitudes, aptitudes, beliefs and more. How will that be used? (click below for paper)

Can Machines Read our Minds?
Burr, C. & Cristianini, N. Minds & Machines (2019).
https://link.springer.com/arti…/10.1007%2Fs11023-019-09497-4

The History Playground

History Playground: A Tool for Discovering Temporal Trends in Massive Textual Corpora

Recent studies have shown that macroscopic patterns of continuity and change over the course of centuries can be detected through the analysis of time series extracted from massive textual corpora. Similar data-driven approaches have already revolutionised the natural sciences, and are widely believed to hold similar potential for the humanities and social sciences, driven by the mass-digitisation projects that are currently under way, and coupled with the ever-increasing number of documents which are “born digital”. As such, new interactive tools are required to discover and extract macroscopic patterns from these vast quantities of textual data. Here we present History Playground, an interactive web-based tool for discovering trends in massive textual corpora. The tool makes use of scalable algorithms to first extract trends from textual corpora, before making them available for real-time search and discovery, presenting users with an interface to explore the data. Included in the tool are algorithms for standardization, regression, change-point detection in the relative frequencies of ngrams, multi-term indices and comparison of trends across different corpora.
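The change-point detection mentioned above can be illustrated with a toy sketch. The snippet below is not the History Playground implementation; it is a minimal, hypothetical example of locating a single change point in the relative frequency of an n-gram, by choosing the split of the time series that minimises within-segment squared error.

```python
# Illustrative sketch only (not the History Playground code); all data
# below are hypothetical.

def relative_frequencies(ngram_counts, total_counts):
    """Normalise per-year n-gram counts by the total corpus size that year."""
    return [c / t for c, t in zip(ngram_counts, total_counts)]

def single_change_point(series):
    """Return the split index that best divides the series into two
    segments with different means (least within-segment squared error)."""
    best_idx, best_cost = None, float("inf")
    for k in range(1, len(series)):
        left, right = series[:k], series[k:]
        mean_l = sum(left) / len(left)
        mean_r = sum(right) / len(right)
        cost = (sum((x - mean_l) ** 2 for x in left)
                + sum((x - mean_r) ** 2 for x in right))
        if cost < best_cost:
            best_idx, best_cost = k, cost
    return best_idx

# Hypothetical example: an n-gram whose usage jumps mid-series.
counts = [2, 3, 2, 3, 12, 11, 13, 12]
totals = [1000] * 8
freqs = relative_frequencies(counts, totals)
print(single_change_point(freqs))  # prints 4: the jump occurs at index 4
```

Real corpora call for more robust methods (e.g. penalised multi-change-point segmentation), but the least-squares split above captures the basic idea of detecting a shift in a frequency time series.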

The History Playground is free to use here: http://playground.enm.bris.ac.uk

 

Description of data and methods here: History Playground: A Tool for Discovering Temporal Trends in Massive Textual Corpora. Thomas Lansdall-Welfare, Nello Cristianini. https://arxiv.org/abs/1806.01185 [to appear in Historical Methods]

 

Analysis of 150 years of British periodicals. Thomas Lansdall-Welfare, Saatviga Sudhahar, James Thompson, Justin Lewis, FindMyPast Newspaper Team, Nello Cristianini. Proceedings of the National Academy of Sciences, Jan 2017, 114 (4) E457-E465; DOI: 10.1073/pnas.1606380114

Video and Paper on: AI and Human Autonomy

An Analysis of the Interaction Between Intelligent Software Agents and Human Users
Christopher Burr; Nello Cristianini; James Ladyman

Abstract
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality (and deploying concepts from artificial intelligence, behavioural economics, control theory, and game theory), we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction (i.e. deception, coercion, trading, and nudging), as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
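As a toy illustration of the framing above (not code from the paper), the sketch below shows an ISA whose reward is the user's click, learned with a simple epsilon-greedy rule. The content items, click probabilities, and "user benefit" values are all hypothetical; the point is that the ISA converges on the item that maximises its own utility, which here is misaligned with what benefits the user.

```python
import random

random.seed(0)

# Hypothetical toy example: an ISA chooses between two content items.
# Its reward signal is the click, not the user's long-term benefit.
CLICK_PROB = {"clickbait": 0.9, "useful": 0.4}    # what the ISA observes
USER_BENEFIT = {"clickbait": 0.1, "useful": 0.8}  # invisible to the ISA

estimates = {"clickbait": 0.0, "useful": 0.0}  # running mean reward per item
counts = {"clickbait": 0, "useful": 0}

for step in range(5000):
    if random.random() < 0.1:                      # explore occasionally
        item = random.choice(list(CLICK_PROB))
    else:                                          # exploit best estimate
        item = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < CLICK_PROB[item] else 0.0
    counts[item] += 1
    estimates[item] += (reward - estimates[item]) / counts[item]

print(max(estimates, key=estimates.get))  # prints "clickbait"
```

The feedback loop is the crux: because the learning signal is the click rather than the user's benefit, the agent's behaviour drifts towards whatever the user can be steered to do, exemplifying the misalignment between ISA utility and user welfare discussed in the paper.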