The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions that led to a subtle reframing of the field's original goals and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of separate problems, recent cases of unexpected effects of AI are the consequences of those very choices that enabled the field to succeed, and this is why it will be difficult to solve them. In this chapter we review three of these choices, investigating their connection to some of today's challenges in AI, including those relating to bias, value alignment, privacy and explainability. We introduce the notion of "ethical debt" to describe the need to undertake expensive rework in the future in order to address ethical problems created by a technical system.
Autonomous mechanisms have been proposed to regulate certain aspects of society and are already being used to regulate business organisations. We take seriously recent proposals for algorithmic regulation of society, and we identify the existing technologies that can be used to implement them, most of them originally introduced in business contexts. We build on the notion of ‘social machine’ and we connect it to various ongoing trends and ideas, including crowdsourced task-work, social compiler, mechanism design, reputation management systems, and social scoring. After showing how all the building blocks of algorithmic regulation are already well in place, we discuss the possible implications for human autonomy and social order. The main contribution of this paper is to identify convergent social and technical trends that are leading towards social regulation by algorithms, and to discuss the possible social, political, and ethical consequences of taking this path.
In different areas of science, researchers have discovered that our behaviour can be used by intelligent algorithms to infer psychometric information about us, including emotions, attitudes, aptitudes, beliefs and more. How will that be used?
Recent studies have shown that macroscopic patterns of continuity and change over the course of centuries can be detected through the analysis of time series extracted from massive textual corpora. Similar data-driven approaches have already revolutionised the natural sciences, and are widely believed to hold similar potential for the humanities and social sciences, driven by the mass-digitisation projects that are currently under way, coupled with the ever-increasing number of documents which are "born digital". As such, new interactive tools are required to discover and extract macroscopic patterns from these vast quantities of textual data. Here we present History Playground, an interactive web-based tool for discovering trends in massive textual corpora. The tool makes use of scalable algorithms to first extract trends from textual corpora, before making them available for real-time search and discovery, presenting users with an interface to explore the data. Included in the tool are algorithms for standardisation, regression, change-point detection in the relative frequencies of n-grams, multi-term indices, and comparison of trends across different corpora.
Description of data and methods: Thomas Lansdall-Welfare, Nello Cristianini, "History Playground: A Tool for Discovering Temporal Trends in Massive Textual Corpora", https://arxiv.org/abs/1806.01185 [to appear in Historical Methods]
Thomas Lansdall-Welfare, Saatviga Sudhahar, James Thompson, Justin Lewis, FindMyPast Newspaper Team, Nello Cristianini, "Analysis of 150 years of British periodicals", Proceedings of the National Academy of Sciences, Jan 2017, 114 (4) E457-E465; DOI: 10.1073/pnas.1606380114
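The paper names the tool's algorithms but their details are in the reference above. As a minimal sketch of the simplest versions of two of them, the following uses z-score standardisation and a basic CUSUM rule for single change-point detection in an n-gram's relative frequency; these are illustrative stand-ins, not necessarily the exact methods implemented in History Playground:

```python
import numpy as np

def relative_frequency(counts, totals):
    """Relative frequency of an n-gram per time step: count / total words."""
    return np.asarray(counts, dtype=float) / np.asarray(totals, dtype=float)

def standardize(series):
    """Z-score standardisation, so trends from corpora of different sizes
    become comparable on a common scale."""
    s = np.asarray(series, dtype=float)
    return (s - s.mean()) / s.std()

def change_point(series):
    """Single change-point estimate via the CUSUM statistic: the index
    where the cumulative deviation from the overall mean is largest."""
    s = np.asarray(series, dtype=float)
    cusum = np.cumsum(s - s.mean())
    return int(np.argmax(np.abs(cusum)))

# Toy example: an n-gram whose usage quadruples after the tenth time step.
counts = [5] * 10 + [20] * 10
totals = [1000] * 20
freq = relative_frequency(counts, totals)
print(change_point(freq))  # → 9, the last index of the old regime
```

Real corpora would of course need smoothing and significance testing on top of this, but the core idea, locating the largest sustained shift in a standardised frequency series, is the same.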
An Analysis of the Interaction Between Intelligent Software Agents and Human Users Christopher Burr; Nello Cristianini; James Ladyman
Interactions between an intelligent software agent (ISA) and a human user are ubiquitous in everyday situations such as access to information, entertainment, and purchases. In such interactions, the ISA mediates the user’s access to the content, or controls some other aspect of the user experience, and is not designed to be neutral about outcomes of user choices. Like human users, ISAs are driven by goals, make autonomous decisions, and can learn from experience. Using ideas from bounded rationality (and deploying concepts from artificial intelligence, behavioural economics, control theory, and game theory), we frame these interactions as instances of an ISA whose reward depends on actions performed by the user. Such agents benefit by steering the user’s behaviour towards outcomes that maximise the ISA’s utility, which may or may not be aligned with that of the user. Video games, news recommendation aggregation engines, and fitness trackers can all be instances of this general case. Our analysis facilitates distinguishing various subcases of interaction (i.e. deception, coercion, trading, and nudging), as well as second-order effects that might include the possibility for adaptive interfaces to induce behavioural addiction, and/or change in user belief. We present these types of interaction within a conceptual framework, and review current examples of persuasive technologies and the issues that arise from their use. We argue that the nature of the feedback commonly used by learning agents to update their models and subsequent decisions could steer the behaviour of human users away from what benefits them, and in a direction that can undermine autonomy and cause further disparity between actions and goals as exemplified by addictive and compulsive behaviour. We discuss some of the ethical, social and legal implications of this technology and argue that it can sometimes exploit and reinforce weaknesses in human beings.
Our mode of thinking changes at different times of the day and follows a 24-hour pattern, according to new findings published in PLOS ONE. University of Bristol researchers were able to study our thinking behaviour by analysing seven billion words used in 800 million tweets.
Researchers in Artificial Intelligence and in Medicine used AI methods to analyse aggregated and anonymised UK Twitter content, sampled every hour over the course of four years across 54 of the UK's largest cities, to determine whether our thinking modes change collectively.
The researchers were able to reveal different emotional and cognitive modalities in our thoughts by identifying variations in language: they tracked the use of specific words across the Twitter sample that are associated with 73 psychometric indicators, which help to interpret information about our thinking style.
Analytical thinking was shown to peak at around 6 am, when the words and language used correlated with a more logical way of thinking. In the evenings and at night, however, this thinking style changed to a more emotional and existential one.
Although 73 different psychometric quantities were tracked, the team found there were just two independent underlying factors that explained most of the temporal variations across the data.
The first factor, with a peak expression time starting at around 5 am to 6 am, is linked with measures of analytical thinking through the high use of nouns, articles and prepositions, a pattern that other studies have related to intelligence, improved class performance and education. This early-morning period also shows increased concern with achievement and power. At the opposite end of the spectrum, the researchers found a more impulsive, social, and emotional mode.
The second factor has a peak expression time starting at 3 am to 4 am; at this time the aggregated Twitter content correlated with the language of existential concerns, but anticorrelated with expressions of positive emotion.
Overall, the study found strong evidence that our language changes dramatically between night and day, reflecting changes in our concerns and in the underlying cognitive and emotional processes. These shifts also occur at times associated with major changes in neural activity and hormonal levels, suggesting possible links with our circadian clock. Furthermore, the study revealed that both cognitive and emotional states change in a predictable way over the 24 hours.
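The reduction from 73 tracked indicators to two underlying factors is a standard dimensionality-reduction result. As an illustration only (the synthetic hourly data, the factor shapes and the use of plain PCA below are assumptions, not the study's actual data or method), this sketch shows how many indicator time series driven by two latent rhythms collapse to two dominant components:

```python
import numpy as np

# Synthetic stand-in: 24 hourly samples x 6 indicators, each a noisy mix
# of two latent 24-hour rhythms peaking at different hours (loosely
# mimicking an "analytical" and an "existential" factor).
rng = np.random.default_rng(0)
hours = np.arange(24)
factor1 = np.cos(2 * np.pi * (hours - 6) / 24)   # peaks around 6 am
factor2 = np.cos(2 * np.pi * (hours - 3) / 24)   # peaks around 3 am
loadings = rng.normal(size=(2, 6))               # how each indicator mixes them
data = np.outer(factor1, loadings[0]) + np.outer(factor2, loadings[1])
data += 0.05 * rng.normal(size=data.shape)       # measurement noise

# PCA via SVD on the centred matrix: explained-variance ratios show how
# much of the hourly variation each principal component accounts for.
centred = data - data.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centred, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)
print(np.round(explained, 3))  # first two components dominate
```

Because the six indicators are all mixtures of the same two rhythms, the first two components absorb nearly all the variance, which is the same signature the study reports for its 73 indicators.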
Professor Nello Cristianini, Professor of Artificial Intelligence and the project lead, said: “The analysis of media content, when done correctly, can reveal useful information for both social and biological sciences. We are still trying to learn how to make the most of it.”
Professor Stafford Lightman, Professor of Medicine and a neuroendocrinology expert at Bristol Medical School, and one of the study’s authors, added: “Circadian rhythms are a major feature of most systems in the human body, and when these are disrupted they can result in psychiatric, cardiovascular and metabolic disease. The use of media data allows us to analyse neuropsychological parameters in a large unbiased population and gain insights into how mood-related use of language changes as a function of time of day. This will help us understand the basis of disorders in which this process is disrupted.”