Intelligent Machines: a brief history (Parts 1-3)

Below is a series of three blog posts (part 1, part 2, part 3) that I wrote for Autonomy last year on the history of intelligent machines. It serves as an introduction for anyone curious about artificial intelligence and how it might shape the future of digital automation in work and in society more generally.

Introduction

The notion of what constitutes intelligence, and therefore what constitutes an intelligent machine, has been widely debated throughout the history of Western thought. Descartes’ mind-body dualism, Marx’s humanist distinction between the intentionality of an architect and the functionality of a bee, and Allen Newell and Herbert Simon’s ‘Physical Symbol System’ hypothesis, which argued that a physical symbol system “has the necessary and sufficient means for general intelligent action”, are just a few examples. Stories of something approximating an intelligent machine go back to the eighth century BCE in Homer’s Iliad. These self-moving machines or ‘automata’ were made by Hephaestus, the god of smithing, and were servants “made of gold, which seemed like living maidens. In their hearts there is intelligence, and they have voice and vigour”.[i] In De Motu Animalium, Aristotle essentially conceived of planning as information-processing.[ii] In developing ontology and epistemology, he also arguably provided the bases of the representation schemes that have long been central to AI.[iii] The cover of the first edition of Russell and Norvig’s famous text Artificial Intelligence: A Modern Approach[iv] even shows the notation of Alice in Wonderland author Lewis Carroll[v] on Aristotle’s theory of the syllogism – the basis for logic-based AI.

From Descartes to Turing

The idea that we can test machinic intelligence is nearly as old as the concept of intelligent machines. Writing in 1637, Descartes proposed two tests to distinguish human from machine – tests far more demanding than the Turing Test (see below):

If there were machines which bore a resemblance to our body and imitated our actions as far as it was morally possible to do so, we should always have two very certain tests by which to recognise that, for all that, they were not real men.[vi]

The first test imagines a machine so constituted that it can “utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs”. However, such a machine cannot produce speech fully enough to “reply appropriately to everything that may be said in its presence”. This is essentially the criterion met – and failed – by many contemporary artificial intelligences. The second test concerns situations in which machines can “perform certain things as well as or perhaps better than any of us can do”, yet fall short in others, which shows that they did not “act from knowledge”, but only from “the disposition of their organs”. An intelligent machine can only pass both of Descartes’ tests if its functionality goes beyond a narrowly defined intelligence, such that it has the capacity for knowledge: it must understand any given question well enough to answer beyond programmed responses. This leads Descartes to the conclusion that it is “impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act”.[vii]

Intelligent machines that approximate human understanding have yet to be produced. However, intelligent machines of a narrower type have existed – first virtually, then in reality – since Charles Babbage’s Analytical Engine of 1834. This machine was designed to be programmed with punched cards (an early medium for feeding instructions to a machine) and could perform operations based on the mathematization of logic. The Countess of Lovelace, Ada Byron King – popularly known as Ada Lovelace – worked with Babbage and prophesied the implications of the algorithms that underpinned it. We can think of algorithms as a type of virtual machine: an “information-processing system that the programmer has in mind when writing a program, and that people have in mind when using it”.[viii] Lovelace theorised virtual machines that formed the foundations of modern computing, including stored programs, feedback loops and bugs, among other things. She also recognised the potential generality of such a machine to represent nearly “all subjects in the universe”, predicting that a machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”, though she could not say how.[ix]

Advancements in mathematics and logic allowed for a breakthrough in 1936, when Alan Turing showed that every possible computation can in principle be performed by a single mathematical system, now called a Universal Turing Machine.[x] Turing spent the war years codebreaking at Bletchley Park and the decade after thinking about how this virtual machine could be turned into an actual physical machine. He helped design the first modern computer, which was completed in Manchester in 1948, and is usually credited with providing the theoretical break that led to modern computation and AI. In an unpublished paper from 1947, Turing discussed “intelligent machines”. A few years later he published his famous paper asking, “Can a machine think?”, and argued that machines are capable of intelligence. To make his case, he first constructed an “imitation game”, now known as the “Turing Test”, which continues to influence popular debates about AI.[xi] The game involves three people: a man (A) and a woman (B) who communicate through typescript with an interrogator (C) in a separate room. The interrogator aims to determine which of the other two is the man and which is the woman. Turing argued that the question “What will happen when a machine takes the part of A in this game?” should replace the original question “Can a machine think?”. A failure to distinguish between machine and human would indicate the intelligence of the machine. Turing then went on to consider nine different objections, which still form the classical criticisms of artificial intelligence. One of the most enduring is ‘Lady Lovelace’s Objection’: that the Analytical Engine “has no pretensions to originate anything. It can do whatever we know how to order it to perform”.[xii] However, contemporary “expert systems” and “evolutionary” AI have reached conclusions unanticipated by their designers.[xiii] Interestingly, a machine with a set of canned responses that happened to fit perfectly the questions asked by a human would pass a Turing Test, but would not pass Descartes’ tests.
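To make the idea of a universal machine concrete, here is a minimal sketch of a Turing machine simulator in Python. The transition table, blank symbol and bit-flipping example program are my own illustrative choices, not Turing’s notation:

```python
# A Turing machine: a finite table of rules reading and writing symbols on an
# unbounded tape. Each rule maps (state, symbol) -> (write, move, next state).

def run_turing_machine(program, tape, state="start", head=0, max_steps=1000):
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")              # "_" is the blank symbol
        write, move, state = program[(state, symbol)]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells))

# An illustrative program: flip every bit, then halt at the first blank.
flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "L", "halt"),
}

print(run_turing_machine(flip_bits, "10110"))  # prints "01001_"
```

A universal machine is simply one of these whose tape holds both another machine’s rule table and that machine’s input – the insight behind the stored-program computer.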

From Russell to MINDER

Following the innovations of Turing and Lovelace, the advancement of intelligent machines picked up speed from the 1950s into the 1970s, thanks in large part to three developments: Turing’s work, Bertrand Russell’s propositional logic and Charles Sherrington’s theory of neural synapses. In a famous paper titled “A Logical Calculus of the Ideas Immanent in Nervous Activity”, the neurologist and psychiatrist Warren McCulloch and the mathematician Walter Pitts combined the binary systems of Turing, Russell and Sherrington by mapping the 0/1 of individual states in Turing machines onto the true/false values of Russell’s logic and onto the on/off activity of Sherrington’s brain cells.[xiv] During this time a number of proto-intelligent machines were built. For example, the Logic Theory Machine proved eighteen of Russell’s key logical theorems and even improved on one of them. There was also the General Problem Solver (GPS), which could apply a set of computations to any problem that could be represented according to specific categories of goals, sub-goals, actions and operators.[xv] At the time, these intelligent machines relied almost exclusively on formal logic and representation, which dominated the early development of computing. Margaret Boden terms this type of artificial intelligence “Good Old-Fashioned AI”, or GOFAI.
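The McCulloch-Pitts mapping fits in a few lines of code. What follows is a modern paraphrase rather than their original calculus: a unit outputs 1 (‘true’, ‘on’) when the weighted sum of its binary inputs reaches a threshold, which suffices to implement Russell’s logical connectives:

```python
def unit(inputs, weights, threshold):
    """A McCulloch-Pitts neuron: fire (1) if the weighted input sum reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# 0/1 doubles as Turing's machine states, Russell's false/true,
# and Sherrington's off/on neurons.
AND = lambda a, b: unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    unit([a],    [-1],   threshold=0)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and NOT(1) == 0
```

Networks of such units, McCulloch and Pitts argued, could realise any logical expression – the bridge between neurons and computation.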

The binary systems synthesised by McCulloch and Pitts helped to catalyse the embryonic cybernetics movement, which emerged alongside the symbolic/representational paradigm discussed above. The term ‘cybernetics’ was coined in 1948 by Norbert Wiener, an MIT mathematician and engineer who developed some of the first automatic systems and defined the field as “the study of control and communication in the animal and the machine”.[xvi] Cyberneticians examined a variety of phenomena related to nature and technology, including autonomous thought, biological self-organisation, autopoiesis and human behaviour. The driving idea behind cybernetics was the feedback loop or “circular causation”, which allows a system to make continual adjustments to itself based on the aim it was programmed to achieve. Such cybernetic insights were later applied to social phenomena by Stafford Beer, among others, to model management processes. Wiener and Beer’s insights were used in Project Cybersyn – a pathbreaking method of managing and planning the Chilean national economy under the presidency of Salvador Allende from 1971 to 1973.[xvii] However, as AI gained increasing attention from the public and from government funding bodies, a split emerged between two paradigms: the symbolic/representational paradigm, which studied mind, and the cybernetic/connectionist paradigm, which studied life itself. The symbolic/representational paradigm came to dominate the field.
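The feedback loop is easy to illustrate in code. The thermostat below is a stock textbook example rather than anything from Wiener or Beer: the system measures the gap between its state and its goal, and that error feeds back to shape its next action:

```python
# Circular causation in miniature: the system's output becomes its next input.

def thermostat(temperature, set_point=20.0, gain=0.5, steps=10):
    for step in range(steps):
        error = set_point - temperature  # measure the gap to the goal
        temperature += gain * error      # act on the error; this changes the next measurement
        print(f"step {step}: {temperature:.2f} degrees")
    return temperature

thermostat(temperature=10.0)  # settles toward the 20-degree set point
```

Cybersyn applied the same principle at the scale of an economy: factories reported production data, which fed back into planning decisions.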

There have been numerous theoretical and technological developments from the 1960s through to the present that provided the foundations for the range of intelligent machines we rely on today. One of the most important was the re-emergence in 1986 of parallel distributed processing, which formed the basis for artificial neural networks, a type of computing that loosely mimics the human brain. Artificial neural networks are composed of many interconnected units, each capable of computing one thing; but instead of executing sequential, top-down instructions given by formal logic, they run a huge number of parallel processes, controlled from the bottom up on the basis of probabilistic inference. They are the basis for what is now called “deep learning”, which uses multi-layer networks and learning algorithms that systematically trace how each part of the network contributed to a computation, allowing the network to adapt and improve itself. Another important development was Rosalind Picard’s ground-breaking work on “affective computing”, which inaugurated the study of human emotion and artificial intelligence in the late 1990s.[xviii] Marvin Minsky also influenced the incorporation of emotion into AI by considering the mind as a whole, inspiring Aaron Sloman’s MINDER program in the late 1990s.[xix] MINDER indicates some of the ways in which emotions can control behaviour by scheduling competing motives. Their approaches also inspired more recent hybrid models of machine consciousness, such as LIDA (Learning Intelligent Distribution Agent), developed by researchers led by Stan Franklin.[xx]
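A toy forward pass shows what many interconnected units computing in parallel means in practice. The layer sizes and random weights below are stand-ins; a trained network would have adjusted them using the credit-assignment algorithms revived with parallel distributed processing:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def layer(x, weights, biases):
    # Every unit computes one weighted sum; all units in a layer work in
    # parallel, and the nonlinearity lets stacked layers represent more
    # than a single linear map.
    return np.maximum(0.0, x @ weights + biases)  # ReLU activation

x = rng.normal(size=(1, 4))                    # one input, four features
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)  # hidden layer of 8 units
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)  # output layer of 2 units

hidden = layer(x, w1, b1)
output = hidden @ w2 + b2                      # linear read-out
print(output)
```

“Deep” learning is, at bottom, this picture with many more layers and units, plus a training procedure that nudges every weight in proportion to its contribution to the output error.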

What puts the ‘intelligence’ in Artificial Intelligence?

Today there are many different kinds of intelligent machines, with many different applications. In a 1955 proposal for a conference held at Dartmouth College during the summer of 1956, the study of intelligent machines was essentially rebranded as “artificial intelligence”.[xxi] In that proposal, the authors state that “a truly intelligent machine will carry out activities which may best be described as self-improvement”.[xxii] However, a single definition of artificial intelligence is difficult to adhere to, especially in a field rife with debate. For perspective, Legg and Hutter collect over seventy different definitions of the term.[xxiii] It has been variously described as the “art of creating machines that perform functions that require intelligence when performed by people”,[xxiv] as well as “the branch of computer science that is concerned with the automation of intelligent behaviour”.[xxv] One of the best definitions comes from the highly influential philosopher and computer scientist Margaret Boden: “Artificial intelligence (AI) seeks to make computers do the sorts of things that minds can do”.[xxvi] Within this definition, Boden (2016, p. 6) classifies five major types of AI, each with its own variations. The first is classical, or symbolic, “Good Old-Fashioned AI” (the GOFAI mentioned above), which can model learning, planning and reasoning based on logic; the second is artificial neural networks or connectionism, which can model aspects of the brain, recognise patterns in data and facilitate “deep learning”; the third is evolutionary programming, which models biological evolution and brain development; the last two, cellular automata and dynamical systems, are used to model development in living organisms.

None of these types of AI can currently approximate anything close to human intelligence in terms of general cognitive capacities. A human level of AI is usually referred to as artificial general intelligence, or AGI. An AGI should be capable of solving complex problems across many different domains and of autonomous control over its own thoughts, worries, feelings, strengths, weaknesses and predispositions (Goertzel and Pennachin, 2007). The only AI that exists right now is of a narrower type (often called artificial narrow intelligence, or ANI), in that its intelligence is generally limited to the frame in which it is programmed. Some intelligent machines can currently evolve autonomously through deep learning, but these remain a weak form of AI relative to human cognition. In an influential essay from 1980, John Searle makes a distinction between “weak” and “strong” AI that is useful for understanding the current capacities of AI versus AGI. For weak AI, “the principal value of the computer in the study of the mind is that it gives us a very powerful tool”; while for strong AI “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states”.[xxvii] For strong AI, programs are not merely tools that enable humans to develop explanations of cognition; the programs themselves are essentially the same as human cognition.

The Prospect of General Intelligence

While we do not yet have AGI, investment in ANI is only increasing and will have a significant impact on scientific and commercial development. These narrow intelligences are very powerful, able to perform a huge number of computations that would in some cases take humans multiple lifetimes. Computers can now beat world champions at popular games of creative reasoning such as chess (IBM’s Deep Blue in 1997), Jeopardy! (IBM’s Watson in 2011) and Go (DeepMind’s AlphaGo in 2016). The Organisation for Economic Co-operation and Development (OECD) found that AI start-ups’ share of worldwide private equity investment increased from just 3% in 2011 to roughly 12% in 2018.[xxviii] Germany is planning to invest €3 billion in AI research between now and 2025 to help implement its national AI strategy (“AI Made in Germany”), while the UK has a thriving AI start-up scene and £1 billion of government support.[xxix] In the USA, venture capitalists invested US$5 billion in AI in 2017 and US$8 billion in 2018.[xxx] The heavy investment in ANI start-ups and the extremely high valuations of some of the leading tech companies funding AGI research might yet lead to an artificial general intelligence in the coming years.

Achieving an artificial general intelligence could be a watershed moment for humanity, allowing complex problems to be solved at a scale once unimaginable. However, the rise of AGI comes with significant ethical issues, and there is a debate as to whether AGI would be a benevolent or malevolent force in relation to humanity. Some fear such developments could lead to an artificial super intelligence (ASI), which would be “much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”.[xxxi] In an increasingly connected world (the so-called internet of things), artificial super intelligences could potentially “cause human extinction in the course of optimizing the Earth for their goals”.[xxxii] It is important, therefore, that humans remain in control of our technologies and use them for social good. As Stephen Hawking noted in 2016, “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which”.

Endnotes

[i] Homer, 1924. The Iliad. William Heinemann, London. pp. 417–421

[ii] Aristotle, 1978. Aristotle’s De motu animalium. Princeton University Press, Princeton.

[iii] Glymour, G., 1992. Thinking Things Through. MIT Press, Cambridge, Mass.

[iv] Russell, S.J. and Norvig, P., 2010. Artificial intelligence: a modern approach, 3rd ed. Pearson Education, Upper Saddle River, N.J;Harlow;

[v] Carroll, L., 1958. Symbolic logic, and, The game of logic : (both books bound as one), Mathematical recreations of Lewis Carroll. Dover, New York.

[vi] Descartes, R., 1637, 1931. The philosophical works of Descartes. Cambridge University Press, Cambridge.

[vii] Ibid., p. 116

[viii] Boden, M.A., 2016. AI : Its Nature and Future. OUP, Oxford. p. 4

[ix] Lovelace, A.A., 1989. Notes by the Translator (1843), in: Hyman, R.A. (Ed.), Science and Reform: Selected Works of Charles Babbage. Cambridge University Press, Cambridge, pp. 267–311.

[x] Turing, A.M., 1936. “On Computable Numbers with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, Series 2, 42/3 and 42/4., in: Davis, M. (Ed.), The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems, and Computable Functions. Raven Press, Hewlett, NY, pp. 116–53.

[xi] Nisson, N., 1998. Artificial Intelligence: A New Synthesis. Morgan Kaufmann, San Francisco.

[xii] Lovelace, A.A., 1989. Notes by the Translator (1843), in: Hyman, R.A. (Ed.), Science and Reform: Selected Works of Charles Babbage. Cambridge University Press, Cambridge, pp. 303.

[xiii] See Boden, M.A., 2016. AI : Its Nature and Future. OUP, Oxford. See also Luger, G.F., 1998. Artificial intelligence : structures and strategies for complex problem solving. England, United Kingdom.

[xiv] Mcculloch, W.S., Pitts, W., 1943. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–133. https://doi.org/10.1007/BF02478259

[xv] See Newell, A., Simon, H., 1956. The logic theory machine–A complex information processing system. IRE Transactions on Information Theory 2, 61–79. https://doi.org/10.1109/TIT.1956.1056797. See also Simon, H.A., Newell, A., 1972. Human problem solving / Allen Newell, Herbert A. Simon, Human problem solving / Allen Newell, Herbert A. Simon. Prentice-Hall, Englewood Cliffs, N.J.

[xvi] Wiener, N., 1961. Cybernetics : or, Control and communication in the animal and the machine, Second edition. ed. M.I.T. Press, New York.

[xvii] Medina, E., 2014. Cybernetic revolutionaries : technology and politics in Allende’s Chile. The MIT Press, Cambridge.

[xviii] Picard, R.W., 1997. Affective computing. MIT Press, Cambridge, Mass.

[xix] Minsky, M., 2006. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Simon & Schuster, Riverside.

[xx] Baars, B.J., Franklin, S., 2009. CONSCIOUSNESS IS COMPUTATIONAL: THE LIDA MODEL OF GLOBAL WORKSPACE THEORY. International Journal of Machine Consciousness 1, 23–32. https://doi.org/10.1142/S1793843009000050

[xxi] McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E., 2006. A proposal for the Dartmouth summer research project on artificial intelligence: August 31, 1955. AI Magazine 27, 12.

[xxii] Ibid., p 14

[xxiii] Legg, S., Hutter, M., 2007. Universal Intelligence: A Definition of Machine Intelligence.(Author abstract)(Report). Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science 17, 391. https://doi.org/10.1007/s11023-007-9079-x

[xxiv] Kurzweil, R., 1990. The age of intelligent machines. MIT Press, London;Cambridge, Mass

[xxv] Luger, G.F., 1998. Artificial intelligence: structures and strategies for complex problem solving. England; p. 1.

[xxvi] Boden, M.A., 2016. AI : Its Nature and Future. OUP, Oxford. p.1.

[xxvii] Searle, J.R., 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3, p. 417. https://doi.org/10.1017/S0140525X00005756

[xxviii] OECD, 2018. Private Equity Investment in Artificial Intelligence (OECD Going Digital Policy Note). Paris.

[xxix] Deloitte, 2019. Future in the balance? How countries are pursuing an AI advantage (Insights from Deloitte’s State of AI in the Enterprise, No. 2nd Edition survey). Deloitte, London.

[xxx] Ibid.

[xxxi] Bostrom, N., 2006. How Long Before Superintelligence? Linguistic and Philosophical Investigations 5, p. 11.

[xxxii] Yudkowsky, E., Salamon, A., Shulman, C., Nelson, R., Kaas, S., Rayhawk, S., McCabe, T., 2010. Reducing Long-Term Catastrophic Risks from Artificial Intelligence. Machine Intelligence Research Institute. p. 1.

Platform Capitalism and the Value Form



Reposted from Salvage Quarterly

According to the speculations of techno-futurologists, left and right, the machines are here to liberate us. Most of the discourse is dominated by the neoliberal right, such as Erik Brynjolfsson and Andrew McAfee, and Andrew Haldane, chief economist of the Bank of England. Their arguments, avoiding questions of exploitation, are naturally popular with the establishment. Brynjolfsson and McAfee’s best-selling book The Second Machine Age has been lauded by leaders at the World Economic Forum.

On the left, however, Paul Mason welcomes our new robotic overlords, in an intellectual synthesis that spans Marx’s 1858 ‘Fragment on Machines’ (treated by Mason as a prophecy), Bogdanov’s 1909 novel Red Star and Martin Ford’s 2015 Rise of the Robots, not to mention André Gorz. Nick Srnicek and Alex Williams offer a more qualified welcome to the possibility of full automation and a workless future. But even the best of these analyses, and even the most alluring visions of networked insurrection and high-tech communist utopia, have to face up to how these technologies have historically been used to deepen exploitation rather than overcome it. It is far more likely, in short, that new technologies will intensify drudgery and further limit human freedom. And it is on this basis that we have to evaluate the impacts of platform technologies on the capitalist mode of production.


 

In Platform Capitalism, Nick Srnicek provides one of the first systematic Marxist interventions into the discourse around data-driven digitalisation, automation and the future of work. ‘Platforms’ are ‘digital infrastructures that enable two or more groups to interact’ within the constraints of the capitalist system. ‘Platform capitalism’ does not simply refer to the rise of alternative work arrangements such as temporary, independent or other forms of precarious labour contracts, but rather to an organisational shift in the system as a whole, due to financialisation, increasing inequality and the tech boom. According to Srnicek, the evolution of internet-era behemoths like Google, Facebook, Amazon and Uber, as well as radically modernised pre-internet companies like GE, Siemens and Rolls Royce, has fundamentally altered the landscape of capital accumulation and property relations between firms. It is important to remember that the US military and other state-funded bodies produced much of the original technological innovation in computing and logistics. The emergence of platform capitalism is essentially the commercialisation and industrial maturation of data-based social relations, theorised in the 1980s as ‘information capitalism’. Does the emergence of platform capitalism constitute a new mode of exploitation? In order to address this question, we must situate the empirical fact of platforms’ existence within a historical and theoretical context.

The evolution of these firms is inextricably bound to the history of asset-price Keynesianism. From approximately the mid-1990s, bubbles in asset prices temporarily drove investment and created jobs and growth where there would otherwise have been none. This was inaugurated with the dot-com bubble. During the economic boom of the 1990s, huge financial investments were poured into telecommunications infrastructure. Millions of miles of new cable and major advances in software and network design allowed for the commercialisation of the previously non-commercial Internet. After the dot-com bubble burst in 2001, the combination of financial deregulation and an ever-increasing demand for financial assets led to another crisis in 2007-8, triggered by complex mortgage-backed securities. The crisis response of central banks, including quantitative easing and the lowering of interest rates, weakened returns on the more traditionally secure financial assets. This encouraged investors to look toward other asset containers – mostly property and the tech sector, or what would soon be known as the emerging platform economy. Largely as a result of this staggering amount of new investment, the technology and connectivity required to transform everyday human activities into digitally recorded data became cheaper and widely available. Srnicek claims that this marked the twenty-first-century shift toward the period of ‘platform capitalism’, in which data collection and monetisation are standard business practice.

Platforms are defined by four attributes: they provide an infrastructure for mediating exchanges between different groups; they follow monopoly tendencies driven by network effects; they strategically cross-subsidise different parts of the business in order to diversify user groups; and they maintain a proprietary architecture that mediates interaction possibilities. These attributes are too broad to tell us anything about the mode of exploitation involved; however, Srnicek’s typology of platforms is based on their methods of revenue generation: advertising, cloud-based services, industrial production, product rental, and lean or gigging hubs. Advertising platforms (Google, Facebook) extract information on user behaviour, analyse that data and sell it to advertisers. Cloud platforms (Salesforce) own hardware and software that are rented out to digital-dependent (read: nearly all) businesses. Industrial platforms (GE) are modernised hybrids of traditional manufacturing and contemporary logistics that use proprietary hardware and software to provide services and lower production costs. Product platforms (Rolls Royce) also transform traditional goods into rented services by collecting fees for the use of their products. Finally, lean platforms (Uber, Airbnb, Deliveroo) outsource all asset ownership other than software and data analytics, then profit as digitally savvy middlemen disrupting established markets (the impact of which will be discussed in more detail below). A platform may combine more than one of these revenue models to make a profit; in every case, however, its most important asset is its intellectual property – company software, algorithms and user data.

The reliance on diverse revenue models raises questions firstly about the structural position of platforms in the overall circuit of capital accumulation, and secondly about whether in the future we will continue to regard the worker as central to production. To address this, it is important to understand the rise of the platform in relation to different forms of exploitation or means of profit-making. The late twentieth century produced a procession of post-capitalist prophets who sought support in Marx’s writings, going back to the Grundrisse, to justify the idea that the workers of the world might eventually ‘step to the side of the production process instead of being its chief actor’. As Tessa Morris-Suzuki argued in the 1980s, the exploitation of surplus labour as the primary source of profit was not, according to Marx, ever intended as an eternal economic law. It was the defining characteristic of industrial capitalism, a particular historical system that evolved out of merchant capitalism, which in turn evolved from feudalism. Marx established that at the most abstract level, aggregate profit is essentially the monetary expression of aggregate surplus value; however, within the circuit of capital, firms can also accumulate profits through unequal exchange and redistributive phenomena between social spheres. Profiting through the former mode is called ‘primary exploitation’ while profiting through the latter is called ‘secondary exploitation’ (this distinction will be further elaborated below). Does the rise of the platform simply indicate a shift from primary to secondary exploitation, or does it represent a new mode of exploitation entirely? Has labour ceased to be the main source of surplus value?


 

In Platform Capitalism, Srnicek offers an innovative framework through which to address this question in his conception of data as ‘raw material’. Data is defined as ‘information that something happened’, distinguished from knowledge, which is ‘information about why something happened’. The act of recording data is either labour carried out by a human or a function of a human-programmed computer algorithm – or, often, both. The production of data thus relies on labour power and a material infrastructure. Data is the ‘raw material that must be extracted’ from the ‘activities of users’, which are the natural source of this raw material. For Marx, raw materials are those parts of nature that have been filtered through previous labour (for example, ore that has been extracted from the earth). Nature is any environment that can exist independently of humanity and serves as the ‘universal subject of human labour’. For example, water is found in nature; yet when it is separated from a river, filtered and stored in tanks, it serves as a raw material. Srnicek extends this Marxian distinction beyond flora and fauna into the realm of human activity itself. Nature becomes any potential activity humans perform in their daily lives: economic transactions, consumer tastes, user movement, location, and so on. The mining and processing of these activities as data transforms them into a raw material, which can be used in the production of service commodities.

The production of a service as a commodity is just like the production of a good as a commodity, except that in the production of service commodities the use-value might vanish with the cessation of the labour-power itself, owing to the relative simultaneity of production and consumption. We should remember that for Marx, the commodity, as a materialisation of labour in the form of its exchange value, is an imaginary and ‘purely social mode of existence’, which has ‘nothing to do with its corporal reality’. All that matters is that the labour process is subsumed into the capitalist form of primary exploitation. Primary exploitation takes place in the labour process itself and can be productive of surplus value, which is translated into profits through markets and competition. It is the human and social process of ‘exploitation of the workman’ by the capitalist, which relies on a classed monopoly of power over the means of industrial production. The rate of surplus value extracted in the labour process is ‘an exact expression for the degree of exploitation of labour-power by capital, or of the worker by the capitalist’. It is important to note that the rate of surplus value is not an expression of the ‘absolute magnitude’ of exploitation, because not all exploited labour produces surplus value. Platforms can profit from the exploitation of surplus labour yet produce no new value, because that labour is socially unproductive; yet they can also profit through the production of surplus value.
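In Marx’s standard notation – my gloss, not a quotation from Srnicek – the rate he calls the ‘exact expression for the degree of exploitation’ is the ratio of surplus value to variable capital:

```latex
s' = \frac{s}{v}
   = \frac{\text{surplus value}}{\text{variable capital}}
   = \frac{\text{surplus labour}}{\text{necessary labour}}
```

On an illustrative example, a platform paying £50 a day in wages for labour that adds £80 of new value would have a rate of surplus value of 30/50 = 60 per cent, however its revenue happens to be booked.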

Harry Braverman, writing in 1974, pointed out that when a worker does not offer labour ‘directly to the user of its effects’, but rather sells it to a capitalist who re-sells it on the market, this is ‘the capitalist form of production in the field of services’. Many services, such as education and health care, can also be productive of surplus value if they take a capitalist form; Marx himself referred to the transport industry as a service that was productive of surplus value. As service producers, platforms like Deliveroo are value-productive, since the commodity produced is the change of place itself. Amazon provides a similarly productive service in its logistics centres, while Google and Facebook sell advertising space and consumer behaviour data. These platforms are arguably a further development of a longer trend away from producing goods as commodities and toward producing services as commodities. Over the past several decades, nearly all developed economies have seen a gradual decline in manufacturing and a rise in services as a share of employment and GDP. The UK in particular has seen a sharp rise, with ‘services’ now comprising 79% of value-added GDP.

Cloud and product platforms do not produce services as commodities; rather, they accumulate profits in the form of rents or through other means. They are not productive of value, and they signal a potential shift from primary to secondary exploitation in profit-making. Secondary exploitation, or ‘profit upon alienation’, takes place primarily through financial and property relations that facilitate the collection of interest payments, rents, or profits through unequal exchange (merchant’s capital). This form of exploitation appropriates surplus labour performed elsewhere and, in doing so, merely redistributes a portion of the total surplus value of society. The existence of secondary exploitation allows for two things: first, it explains how commercial or financial capitalists can profit from non-capitalist spheres without the creation of new value; and second, it allows Marxian economics to account for the difference between the sum of profits and the sum of surplus values that emerges as values are transformed into prices. This is because profit is not the same as surplus value, though the rate of profit tends to equalise across sectors over time.

The developmental arc of a successful platform generally begins with the technological disruption of an existing industry and ends with the platform achieving the status of industry gatekeeper or quasi-monopoly. As platforms expand, they capture an increasingly large amount of data. Their quests for gatekeeper status lead them to diversify and encroach on one another. The rapid expansion of platforms has resulted in new monopolies, which now provide the basic digital and logistical infrastructures upon which much of the economy operates. The increasingly privatised ownership and management of public services and business infrastructure is indicative of the aforementioned shift from primary to secondary exploitation.

The enclosure of electronic ecosystems is a particularly interesting instance of secondary exploitation. Facebook has pursued a strategy of ‘funnelling of data extraction into siloed platforms’ in Africa and other less-developed areas of the world. Its ‘Free Basics’ program has brought Internet access to over 25 million people in more than 37 countries; however, any service other than Facebook that wants access to these users is required to partner with the company and operate through its network and software platform. This combines monopoly (a sole producer) and monopsony (a sole buyer) power to reproduce the exploitative dynamics of accumulation by dispossession. Through these programs, the extractive apparatuses of imperialism have found their contemporary counterpart in the global enclosure of digital infrastructure and the mining of data. Facebook’s ideology of connectivity as a good in itself simply serves the company’s interest and reproduces the exploitation of the economic periphery.

Each platform’s relation to a given mode of exploitation ultimately depends on its concrete form. Advertising platforms like Google and Facebook, as well as lean platforms like Uber and Deliveroo, use their intellectual property to mine data as a raw material, which becomes one of the elements of constant capital in the selling of service commodities. Advertising platforms sell access to billions of users through sophisticated analyses of communication and consumer behaviour patterns, while lean platforms sell particularly efficient means of transportation and access to their user base, profiting through a combination of fees. They also tend to rationalise informal economies of petty commodity producers and consumers into a formal economy mediated through proprietary means. Industrial platforms have a more traditional means of profit-making, while cloud and product platforms like Salesforce or Rolls Royce primarily extract rents from the use of proprietary technologies and infrastructure. Each platform uses a combination of primary and secondary means of exploitation to make a profit.


 

It is difficult to abstract from the concrete relations of the labour process. However, the classification of concrete human labour and its place in the industrial circuit of capital allows us to understand the relation between the production of value on the one hand, and the mere accumulation of profit on the other. It is for this reason that the analysis of platforms must avoid fatalistic metaphysical claims like Negri’s insistence that ‘there is no outside to capitalism’. Accounts of socio-economic phenomena that flatten distinctions between labour and non-labour under capitalism (for example, Beller’s claim that ‘looking is labour’, that merely glancing at an advertisement is productive of surplus-value), or between industrial capitalist and merchant or financial capitalist relations, serve to obscure rather than clarify the underlying processes. Contra autonomist-inspired approaches, which have tended to characterise all activities as potentially free labour, our approach retains labour’s specific meaning in relation to capital, which is neither omniscient nor omnipresent. As Srnicek reminds us, it is precisely because ‘most of our social interactions do not enter into a valorisation process’ that companies are competing to build platforms and capture monetisable data. This is a crucial point. On the one hand, the individual activities of users cannot be classified as free labour, since they are ‘naturally occurring’ and become ‘raw material’ only through recording and processing. If user activity were a source of surplus labour, Srnicek points out, capitalism would have discovered an abundant new frontier of value, resulting in a global boom that shows no sign of appearing. On the other hand, the activities of those who produce the means of extraction and process those raw materials – those who design the user interfaces, write the algorithms, package and sell the analytics – can be classified as labour.

Of the five types of platform, the lean platform has had the most visible and immediate effects on the labour market and workers. Lean platforms have expanded rapidly in sectors that require intensive non-routine manual labour, which is notoriously difficult to automate (this is also one reason why service-sector productivity continues to lag behind that of manufacturing). Lean platforms essentially extend the low-tech model of temp agencies or informal networks of day labourers into really subsumed and digitally mediated service sectors. They offer a private technological ‘fix’ to labour market precarity, taking advantage of job polarisation and the displacement of individuals into the relative surplus population. Companies aim to classify workers as independent contractors and pay piece-rates whenever possible, because these ensure a specific rate of exploitation for service commodities that hourly wages cannot. Lean platforms’ reliance on low margins and the pursuit of market expansion over short-term profitability means that workers bear the brunt of any problems. Uber and Deliveroo have had disputes with their workers, who have repeatedly challenged the companies over poor treatment, contracts and wages. Uber’s recent dispute with Transport for London over its contemptuous attitude to regulations and the law is another manifestation of the same model of accumulation.

The broader effects of lean platforms on the labour market are a controversial topic of debate. Some organisations have significantly downplayed the importance of non-traditional work arrangements through digital platform-based enterprises, while others have hailed them as significant. In an online survey of 2,238 UK adults aged 16-75, the Foundation for European Progressive Studies (FEPS) found that 21 per cent of respondents (proportionally equivalent to 9 million people) had tried to find work on platforms during the past year, and that 11 per cent (4.9 million people) had actually succeeded in doing so. Based on these results, The Work Foundation claimed that a reasonable estimate of the proportion of the workforce finding jobs through digital platforms would be between 5 and 6 per cent. However, recent research from the McKinsey Global Institute suggests that 20 to 30 per cent of the working-age population in the United States and the EU-15, or up to 162 million individuals, are engaged in the ‘independent work’ typical of the ‘gigs’ provided by lean platforms. Of the individuals in the McKinsey survey, 30 per cent had chosen independent work and derived their primary income from it; 40 per cent had chosen it to supplement their income; 14 per cent would have preferred a standard employment relationship, but were primarily independent workers; and 16 per cent engaged in supplemental independent work out of pure necessity. It is important to note that the nature of gig work may lead to under-reporting, and there is a large amount of overlap between different job categories, which makes it difficult to compare platform-based gig work with traditional employment. What is clear, however, is that regardless of the extent of these changes, they constitute an acceleration of existing trends toward casualisation and precarity.

At the moment, huge amounts of venture capital investment in technology, automation and artificial intelligence mean that firms like Amazon and Uber can continue expanding without actually making a profit. With the rise of platform capitalism, there is a strong possibility that we will see a corresponding rise in the organic composition of capital, i.e. a larger share of constant capital or the inert elements (tools, materials, equipment) compared to variable capital or living labour. In Capital vol. 3, Marx argues that this has direct implications for industrial profitability, which might explain the move by many platforms away from service models of production toward a model that allows them to profit by collecting rents from the use of their infrastructure or by appropriating a share of profits from other sectors. This is not sustainable; however, it is likely too early to tell what it means for the future. On the one hand, advertising platforms like Facebook show no signs of slowing down: Facebook reported second-quarter net income for 2017 of $3.89bn, a 71 per cent increase on the previous year. On the other hand, the lean platforms that have driven the rise of the gig economy are already showing signs of slowdown. The JPMorgan Chase Institute has found that participation in labour platforms has levelled off, and that workers’ monthly earnings from labour platforms have fallen by 6 per cent since June 2014 as a result of wage cuts and lower participation. Despite these findings, some of the world’s leading think tanks are recommending that ‘a platform strategy and the business know-how to exploit it is more important than “owning” an ecosystem’. The International Data Corporation predicts that by 2018 more than 50 per cent of large enterprises will either create or partner with industry platforms, and that the number of industry clouds will reach 500 or more. Time will reveal the veracity of this claim, but the shift toward a rentier form of accumulation through secondary exploitation shows no signs of stopping.
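The arithmetic behind the profitability claim, again in Marx’s standard notation rather than anything quoted above: the rate of profit divides surplus value by the total capital advanced, so for a given rate of surplus value a rising organic composition drags it down:

```latex
p' = \frac{s}{c + v} = \frac{s/v}{c/v + 1}
\qquad\Longrightarrow\qquad
\frac{c}{v}\ \text{rising with}\ \frac{s}{v}\ \text{fixed}\ \Rightarrow\ p'\ \text{falling}
```

On this reading, the drift of platforms toward rents is a way of sidestepping a falling rate of profit rather than reversing it.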

The conceptual development of the term ‘platform’, as a new type of firm that relies on the strategic mining of data, is a useful contribution to both Marxian and technological discourses, and it marks a novel economic phenomenon. However, we should be wary of claiming that this is a new mode of exploitation within capitalism. Most of the industry leaders in the platform economy don’t actually produce anything other than a means of profiting from proprietary advantage, the sale of advertising and the commodification of social data. Rather than signalling a fundamental shift in production and the condition of possibility of a technological utopia, they actually represent a regressive shift back toward what Marx referred to as ‘antediluvian’ forms of accumulation, i.e. secondary exploitation.

The evidence indicates that, contra the digital dreams of liberal Californian ideologues or post-capitalist utopians, platform capitalism will not provide the technological impetus to a future free of exploitation and drudgery. It might not even provide the robotic libertarian future romanticised by Silicon Valley entrepreneurs. Sales of industrial robots to the UK fell between 2014 and 2015, while only 14 per cent of business leaders are investing in AI and robotics. Ultimately, many low-margin service platforms will fail over the next few years; monopoly tendencies and cross-subsidisation will push other firms into luxury markets providing expensive convenience on demand; and those remaining will be forced to fold their model into more traditional business models that rely on product or industrial platforms. The rise of platforms may inspire technologically utopian rhetoric, yet it retains the same basic forms of twenty-first-century global capitalism. Unless we collectivise and ‘nationalise the platforms’, changing their very form, there is little hope for a utopian future.


 

A Note on Fieldwork

MI-BW-HO-BR-AH-01

A key element of the fieldwork involves contrasting the narratives of management and workers with regard to conflict and cooperation in the workplace. Through exploring these contrasting accounts at the sectoral level, I will be able to articulate the politics of ‘service’ production in the workplace. From the data, I hope to detail different perceptions of the labour process, as well as the modes of conflict and cooperation. I hope to build on materialist theories of the ‘structured antagonism’ as well as the political dimensions of the value-form literature.

With regard to workers’ experience in the hospitality industry, I’ve found that the prime mover of the employment relation is whether staff are contracted through an agency or in-house. The secondary factor determining different conditions is the division between staff who receive payments from the tronc system and those who don’t; the tronc system is a major source of conflict. Despite working for different companies, workers across the industry have remarkably similar conditions and issues. Each interviewee has so far given both a portrait of their workplace and an account of key conflicts over the course of their employment. It is clear that the rhetoric and strategy of managers contrast with many of the accounts from workers themselves. However, this is most stark when workers are members of a union. Unionised workers often tell me that they speak up and aren’t afraid to say when things aren’t right. Most of the non-union workers I’ve talked to tend to adopt the views of management and often internalise them – the ‘new spirit of capitalism’ is relevant here, as is Hochschild’s ‘managed heart’.

At the professional level, workers’ dissenting narratives are often missing. For example, the British Hospitality Association – the main industry lobbying body in the UK – provides a wealth of literature on the industry and argues for its significance to the national economy. However, it conspicuously omits accounts of the reality of work for most non-supervisory and non-managerial staff. Participants’ accounts of workplace tensions and of the various tactics that managers use to suppress dissent – from intimidation to wage theft – will be used as a counterpoint to the managerial narratives I have collected so far, which fail to recognise the same problems at work. Most managers give a fairly glossy picture of their workplace, despite the fact that the industry is plagued by violations and low pay.

A research agenda for Marxian conceptions of value and the political economy of ‘service’ work


Currently, there is a resurgence in scholarship on Marxian conceptions of value. However, much of the discourse has remained within the realms of heterodox economics, political economy and philosophy. I would like to set out a new line of inquiry, which shifts the aims of this research from the abstract and quantitative toward the concrete and qualitative. Following this line, we will investigate aspects of the Marxian conception of value in relation to the way capitalism actually functions. This entails combining detailed examinations of the real-world experiences of work with a value-[in]formed political economic critique of employment relations.

Marx’s theory of value gives us a tool for understanding the dynamic process of capitalist exploitation, one that overcomes the fragmented way in which that process is experienced. To quote Diane Elson:

What Marx’s theory of value does is provide a basis for showing the link between money relations and labour process relations in the process of exploitation. The process of exploitation is actually a unity; and the money relations and labour process relations which are experienced as two discretely distinct kinds of relation, are in fact one-sided reflections of particular aspects of this unity. Neither money relations nor labour process relations in themselves constitute capitalist exploitation; and neither one can be changed very much without accompanying changes in the other… Marx’s theory of value is able to show this unity of money and labour process because it does not pose production and circulation as two separate, discretely distinct spheres and does not pose value and price as discretely distinct variables. (‘The Value Theory of Labour’, in Value: The Representation of Labour in Capitalism, p. 172)

Through the use of a value-form analytic, scholars and activists alike will be able to develop novel political insights into contemporary relations of production and reproduction, as well as conceptualise emergent forms of work – from ‘services’ to creative industries.

Lines of inquiry might include:

  • accounting for productive and unproductive labour
  • the relations of concrete and abstract labour
  • differential and absolute ground rent and labour
  • the meaning of ‘services’ and ‘deindustrialisation’
  • aesthetic/affective labour and value
  • knowledge/intellectual labour and value
  • the value-form and global value chains
  • the value-form and the social construction of work
  • the value-form and the labours of reproduction
  • financialisation, value, and employment relations


My research: ‘The Politics of Service Production’


My research aims to investigate the politics of service production in the UK hotel sector through exploring experiences of work, specifically with regard to labour processes and management. As service industries have come to dominate many advanced economies, studies of work have increasingly moved toward the areas of health, finance and education. However, with few exceptions, hotels and restaurants remain under-researched in employment relations, especially given their growing importance to certain European economies. Hotels are the empirical focus of this study because they represent a microcosm of the variety of occupations that comprise service industries – from financial management to customer assistance, food preparation and room cleaning. There is a strong public interest in researching experiences of work in hospitality: it is Britain’s fastest-growing industry and currently the fourth-largest by employment, yet it also has a higher rate of low-paid work than any other UK industry (BHA 2011). Much of this expansion is due to significant rises in tourism and migration to the UK in recent years. These factors may have profound consequences for the shape of the economy and of work in the UK.

To understand the nature of work in the hospitality industry, it is essential that research directly engages with workers themselves. The study therefore follows in the methodological tradition of widely respected workplace ethnographies, which have produced classic texts by authors including Burawoy (1979), Beynon (1973), Glucksmann (1982) and Pollert (1981). Workplace ethnographies are an established method of data collection in this field and are essential for studying certain aspects of work. This approach can reveal nuances and complex social phenomena, such as worker resistance, which conventional survey techniques and formal interviews typically fail to uncover. The fieldwork for the research therefore entails an industry-wide study in London based primarily upon participant observation and semi-structured interviews with workers and management. However, it also draws on a variety of other sources, including academic literature on the service sector, Marxian political economy and union archives. There are two stages to the research: the first involves ethnographic participant observation, while the second involves interviews with a cross-section of workers and managers. The research addresses the primary question: “How do labour processes shape the experience of work in UK hotels?” There are two secondary questions: “How does the labour contract mediate the politics of work in UK hotels?” and “How do labour processes and the politics of work in the hospitality sector reflect broader changes in the UK economy in the context of ‘deindustrialisation’?” Through addressing these questions, I plan to foreground key aspects of work in the hospitality industry, while connecting the politics and experiences of work to the wider socio-economic dynamics of capitalism in the UK. The outcome of this research will be a detailed understanding of the politics of work and management for the lowest-paid workers in the UK’s fastest-growing industry.

References:

Beynon, H. (1985) Working for Ford, London: Pelican.

BHA (2011) Hospitality: driving local economies. A Report by the British Hospitality Association. http://www.bha.org.uk/wordpress/wp-content/uploads/2013/08/ENGLAND-HOSPITALITY-DRIVING-LOCAL-ECONOMIES-REPORT-FINAL-OCT-11.pdf.

Burawoy, M. (1979) Manufacturing consent: changes in the labor process under monopoly capitalism, Chicago; London: University of Chicago Press.

Glucksmann, M. (1982) Women on the Line, London: Routledge.

Pollert, A. (1981) Girls, Wives, Factory Lives, London: Macmillan.