Automation Will Create More Jobs, But Most Will Be Shit

See Novara for the original.


In 2013, the academics Carl Frey and Michael Osborne predicted that up to 47% of American jobs were at high risk of automation over the next two decades. Their paper provoked automation hysteria, reflected in pronouncements of the “second machine age”, “fourth industrial revolution” and “industry 4.0”. Seven years on, the reality appears far less dramatic.

Predictions of mass job losses as a result of automation have, it transpires, greatly overstated the job-destructive impact of new technologies, and understated their ability to augment existing jobs and create new ones. 

Critics of the so-called “automation discourse” point to stagnant productivity and weak labour demand. However, not only do slowing productivity and growth fail on their own to explain low labour demand; they also tell us virtually nothing about the actual qualitative effects of automation on work and society. Up until the coronavirus pandemic, there had not been an overall fall in labour demand. Despite the fallout from the global financial crisis, from 2010 to 2015 total employment among member states of the Organisation for Economic Co-operation and Development (OECD) grew by 4.9%, or roughly 27 million jobs. Canada, Mexico and the United States added 12.9 million jobs; the European Union, 3.6 million. While increases in labour demand have slowed in the EU, they have risen significantly in non-EU economies.

Nor should we be surprised by sluggish productivity growth: productivity gains generally require the diffusion of new technologies, and diffusion takes time; computers took nearly 25 years to reach their plateau of 5% of non-residential equipment capital. Recent OECD analysis shows that the pace of digital transformation varies hugely between countries and sectors: firms in the real estate, hospitality and construction industries are investing in software and other information and communication technologies (ICT) much less intensively than those in advertising, finance and law, for example. Just as diffusion varies across sectors, there are significant differences in digitalisation among firms of different sizes. On average, large firms are more digitally intensive than their small and medium-sized counterparts, even in the same sector. For example, 28% of large firms utilise big data analysis, while only 16% of medium-sized and 9% of small firms do so. The divergence in the uptake of digital technologies corresponds to a divergence in productivity between leading and lagging firms.

If we are not facing a jobs apocalypse from overproduction or automation, what impact is technological change actually having on work? Much the same as it has always had. While computers have automated tasks since the 1950s, very few of the 271 occupations listed in the 1950 US census had ceased to exist by 2010 (though some were grouped differently). Labour demand in some occupations declined due to both technological obsolescence (from telegraph operators to typewriter servicers) and full automation (elevator operators); yet very few occupations were destroyed. What this shows is that automation tends to be partial, changing the task composition of most jobs rather than eliminating them entirely.

The management consultancy research that has dominated “automation discourse” tends to portray this in a positive light, assuming that routine, low-skilled tasks will be replaced by high-skilled, high-quality jobs. However, the idea that technological change favours high-skilled labour obscures a more complex relationship between technology and skill. While AI- and data-driven digital automation in advanced capitalist countries has generally created more higher-skilled and higher-income professional jobs, it has also created more in-person service jobs that require lower levels of education and training. Meanwhile, routine, rule-based, middle-income work has been automated, leading to job polarisation and hollowing out the demand for middle-skilled jobs. Evangelists of the “fourth industrial revolution” tend to ignore the negative effects technological change can have on workers, particularly those at the bottom of the pile.

The transformational impact of automation thus has more to do with changes to the task composition of jobs and the quality of work than with outright job destruction. Most jobs are composed of a variety of tasks with varying degrees of susceptibility to automation. Many include a significant component of non-automatable tasks; for example, while AI can outperform humans at large-scale image analysis, these technologies often fail miserably at tasks that require common sense or emotional intelligence.

An automation discourse that focuses on a workless future is a red herring. In the long term, the crisis isn’t one of job destruction – it’s job transformation, and of a kind that is detrimental to (particularly low-income) workers. It’s the incentivisation of cost-cutting approaches to labour, including deskilling, surveillance and intensification, all of which result in lower worker discretion and poorer quality of work. Low-income and lower-skilled workers will bear the brunt of such changes as they are at the highest risk of automation displacement. If they lose their jobs, they are unlikely to be able to retrain, and so will move into other similarly routine jobs, which they will lose again.

The pandemic will likely only accelerate these trends. Managers have been forced to adopt platform technologies and remote working that they have historically resisted. As a result, the pandemic has accelerated the digitisation of all sectors, opening up an unprecedented threat to workers from datafication, digital automation and surveillance technologies. At the same time, the labour market has been decimated in both the US and the EU, with real unemployment rates nearly double the official statistics. The full effects of such technological changes will reflect how power is distributed in society, and thus depend as much on the actions of workers and policymakers as on the increasing sophistication of automation. In this context, it is all the more imperative to end the proliferation of low-waged work, increase social protections, invest in social infrastructures and fight for collective ownership of the means of production, platforms and all.

Tribune Article: Why Silicon Valley Loves Coronavirus

The coronavirus is an exogenous shock to the global economy, causing panic in the financial markets, a jobs apocalypse and an unprecedented crisis in health services. At the same time, the necessary safety measures are challenging the very nature of work and human sociality. Social distancing and lockdowns have been implemented around the world, in many countries enforced by the state with a militaristic strictness. While Boris Johnson waxes Churchillian, indulging in his “wartime” fantasy, the UK is facing a social catastrophe that goes beyond the economy. In the short term, there will be serious losses throughout many industries as clients scale down or go bankrupt and we enter a recession. But in the long term, we will see changes in the nature of work and human sociality that seem closer to the brave new worlds of science fiction than to our pre-coronavirus reality. The tech giants are likely celebrating, and the key to their success will be our quarantine. A different world will emerge as the economy recovers, a world where technology mediates a far greater proportion of our lives than any Silicon Valley ideologue could previously have dreamed possible.

Read the rest of this article on the Tribune site.

Intelligent Machines: a brief history (Parts 1-3)

Below is a series of three blogs (part 1, part 2, part 3) I wrote for Autonomy last year on the history of intelligent machines. This serves as an introduction for anyone curious about artificial intelligence and how it might shape the future of digital automation in work and society more generally.

Introduction

The notion of what constitutes intelligence, and therefore what constitutes an intelligent machine, has been widely debated throughout the history of Western thought. Descartes’ mind-body dualism, Marx’s humanist distinction between the intentionality of an architect and the functionality of a bee, and Allen Newell and Herbert Simon’s ‘Physical Symbol System’ hypothesis, which argued that such a system “has the necessary and sufficient means for general intelligent action”, are just a few examples. Stories of something approximating an intelligent machine go back to the eighth century BCE in Homer’s Iliad. These self-moving machines or ‘automata’ were made by Hephaestus, the god of smithing, and were servants “made of gold, which seemed like living maidens. In their hearts there is intelligence, and they have voice and vigour”.[i] In De Motu Animalium, Aristotle essentially conceived of planning as information-processing.[ii] In developing ontology and epistemology he also arguably provided the bases of the representation schemes that have long been central to AI.[iii] The first edition of Russell and Norvig’s famous text Artificial Intelligence: A Modern Approach[iv] even shows on its cover the notation of Alice in Wonderland author Lewis Carroll[v] on Aristotle’s theory of the syllogism – the basis for logic-based AI.

From Descartes to Turing

The idea that we can test machinic intelligence is nearly as old as the concept of intelligent machines. Writing in 1637, Descartes proposed two tests that distinguish human from machine in a way that is much more demanding than the Turing Test (see below):

“If there were machines which bore a resemblance to our body and imitated our actions as far as it was morally possible to do so, we should always have two very certain tests by which to recognise that, for all that, they were not real men”.[vi]

The first test imagines a machine’s “being” established such that it can “utter words, and even emit some responses to action on it of a corporeal kind, which brings about a change in its organs”. However, this machine cannot yet fully produce speech such that it could “reply appropriately to everything that may be said in its presence”. This is essentially the criterion for many contemporary artificial intelligences. The second test concerns situations in which machines can “perform certain things as well as or perhaps better than any of us can do”, yet fall short in others, which means that they did not “act from knowledge”, but rather only from “the disposition of their organs”. An intelligent machine can only pass both of Descartes’ tests if it has a functionality beyond a narrowly defined intelligence, such that it has the capacity for knowledge. It must understand any given question enough to answer beyond programmed responses. This leads Descartes to the conclusion that it is “impossible that there should be sufficient diversity in any machine to allow it to act in all the events of life in the same way as our reason causes us to act”.[vii]

Intelligent machines that approximate human understanding have yet to be produced. However, intelligent machines of a narrower type have existed – first virtually, then in reality – since Charles Babbage’s Analytical Engine of 1834. This machine was designed to use punch cards (an early form of computation) and could perform operations based on the mathematization of first-order logic. Ada Byron King, Countess of Lovelace – popularly known as Ada Lovelace – worked with Babbage and prophesied the implications of the algorithms that underpinned it. We can think of algorithms as a type of virtual machine, or an “information-processing system that the programmer has in mind when writing a program, and that people have in mind when using it”.[viii] Lovelace theorised virtual machines that formed the foundations of modern computing, including stored programs, feedback loops and bugs, among other things. She also recognised the potential generality of such a machine to represent nearly “all subjects in the universe”, predicting that a machine “might compose elaborate and scientific pieces of music of any degree of complexity or extent”, though she could not say how.[ix]

Advancements in mathematics and logic allowed for a breakthrough in 1936, when Alan Turing showed that every possible computation can in principle be performed by a single mathematical system, now called a Universal Turing Machine.[x] Turing spent the next decade codebreaking at Bletchley Park during World War II and thinking about how this virtual machine could be turned into an actual physical machine. He helped design the first modern computer, which was completed in Manchester in 1948. Turing is usually credited with providing the theoretical break that led to modern computation and AI. In an unpublished paper from 1947, Turing discusses “intelligent machines”. A few years later he published his famous paper in which he asks, “Can a machine think?” and argues that machines are capable of intelligence. To make his case, he first constructs an “imitation game”, now known as the “Turing Test”, which continues to influence popular debates about AI.[xi] The test involves three people – a man (A) and a woman (B) who communicate through typescript with an interrogator (C) in a separate room. The interrogator aims to determine which of the other two is the man and which is the woman. Turing argues that the question “What will happen when a machine takes the part of A in this game?” should replace the original question “Can a machine think?”. The failure to distinguish between machine and human would indicate the intelligence of the machine. Turing then goes on to consider nine different objections, which form the classical criticisms of artificial intelligence. One of the most enduring is ‘Lady Lovelace’s Objection’: that the computer has “no pretensions to originate anything. It can do whatever we know how to order it to perform”.[xii] However, contemporary “expert systems” and “evolutionary” AI have reached conclusions unanticipated by their designers.[xiii] Interestingly, a machine with a set of responses that happened to perfectly fit the questions asked by a human would pass a Turing test, but not Descartes’ tests.
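To make the idea of a universal, rule-following machine concrete, here is a minimal sketch of a Turing machine simulator in Python (my own illustrative example, not from Turing or the original post; the function names and the toy transition table are invented for the illustration):

# A one-tape Turing machine: the head reads a symbol, looks up a rule for
# (state, symbol), writes a symbol, moves left or right, and changes state.
def run_turing_machine(input_tape, transitions, state="start", blank="_", max_steps=1000):
    tape = dict(enumerate(input_tape))  # sparse tape indexed by position
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # halt when no rule applies
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# A toy machine that scans right, flipping 0s and 1s, and halts at the blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}

print(run_turing_machine("10110", flip))  # prints 01001

Turing’s insight was that one such machine, given a suitable transition table as input, can simulate any other – hence “universal”.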

From Russell to MINDER

Following the innovations of Turing and Lovelace, the advancement of intelligent machines picked up speed from the 1950s into the 1970s, thanks in large part to three developments: Turing’s work, Bertrand Russell’s propositional logic and Charles Sherrington’s theory of neural synapses. In a famous paper titled “A Logical Calculus of the Ideas Immanent in Nervous Activity”, the neurologist and psychiatrist Warren McCulloch and the mathematician Walter Pitts combined the binary systems of Turing, Russell and Sherrington by mapping the 0/1 of individual states in Turing machines onto the true/false values of Russell’s logic and onto the on/off activity of Sherrington’s brain cells.[xiv] During this time a number of different proto-intelligent machines were built. For example, the Logic Theory Machine proved eighteen of Russell’s key logical theorems and even improved on one of them. There was also the General Problem Solver (GPS), which could apply a set of computations to any problem that could be represented according to specific categories of goals, sub-goals, actions and operators.[xv] At the time, these intelligent machines relied almost exclusively on formal logic and representation, which dominated the early development of computing. Margaret Boden terms this type of artificial intelligence “Good Old-Fashioned AI” or GOFAI.
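The McCulloch-Pitts mapping is simple enough to express directly in code. Below is an illustrative sketch (my own, with invented names): a binary threshold unit, plus two logic gates built from it, showing how on/off “neurons” can stand in for true/false propositions:

# A McCulloch-Pitts unit: fire (1) iff the weighted sum of binary inputs
# reaches the threshold -- neural on/off standing in for logical true/false.
def mcculloch_pitts(inputs, weights, threshold):
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# Boolean connectives as threshold units:
def AND(a, b): return mcculloch_pitts([a, b], [1, 1], threshold=2)
def OR(a, b):  return mcculloch_pitts([a, b], [1, 1], threshold=1)

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0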

The binary systems synthesised by McCulloch and Pitts helped to catalyse the embryonic cybernetics movement, which emerged alongside the symbolic/representational paradigm discussed above. The term “cybernetics” was coined in 1948 by Norbert Wiener, an MIT mathematician and engineer who developed some of the first automatic systems. Wiener defined cybernetics as “the study of control and communication in the animal and the machine”.[xvi] Cyberneticians examined a variety of phenomena related to nature and technology, including autonomous thought, biological self-organisation, autopoiesis and human behaviour. The driving idea behind cybernetics was the feedback loop or “circular causation”, which allows a system to make continual adjustments to itself based on the aim it was programmed to achieve. Stafford Beer, among others, later applied such cybernetic insights to social phenomena, modelling management processes. Wiener and Beer’s insights were used in Project Cybersyn – a pathbreaking method of managing and planning the Chilean national economy under the presidency of Salvador Allende from 1971 to 1973.[xvii] However, as AI gained increasing attention from the public and government funding bodies, a split emerged between two paradigms – the symbolic/representational paradigm, which studied mind, and the cybernetic/connectionist paradigm, which studied life itself. The symbolic/representational paradigm came to dominate the field.
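The feedback loop itself can be stated in a few lines. The sketch below (illustrative only; the names and numbers are mine) shows a controller that repeatedly measures the gap between a system’s state and its goal and feeds a correction back in – circular causation in miniature:

# A feedback loop: measure the error against the aim, correct in proportion,
# repeat. The system converges on its goal by continual self-adjustment.
def feedback_loop(state, goal, gain=0.5, steps=10):
    for _ in range(steps):
        error = goal - state   # deviation from the programmed aim
        state += gain * error  # feed the error back as a correction
    return state

print(feedback_loop(state=10.0, goal=20.0))  # approaches 20.0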

There have been numerous theoretical and technological developments from the 1960s through to the present that provided the foundations for the range of intelligent machines we rely on today. One of the most important was the re-emergence in 1986 of parallel distributed processing, which formed the basis for artificial neural networks, a type of computing that loosely mimics the brain. Artificial neural networks are composed of many interconnected units, each capable of computing one thing; but instead of executing sequential instructions given from the top down by formal logic, they use a huge number of parallel processes, controlled from the bottom up based on probabilistic inference. They are the basis for what is today called “deep learning”, which uses multi-layer networks and training algorithms that systematically trace the sources of error in a computation, allowing the network to adapt and improve itself. Another important development was Rosalind Picard’s ground-breaking work on “affective computing”, which inaugurated the study of human emotion and artificial intelligence in the late 1990s.[xviii] Marvin Minsky also influenced the incorporation of emotion into AI in considering the mind as a whole, inspiring Aaron Sloman’s MINDER program in the late 1990s.[xix] MINDER indicates some ways in which emotions can control behaviour, scheduling competing motives. Their approaches also inspired more recent hybrid models of machine consciousness such as LIDA (Learning Intelligent Distribution Agent), developed by researchers led by Stan Franklin.[xx]
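To give a sense of what “many simple units computing in parallel” means in practice, here is a minimal two-layer forward pass in Python (an illustrative sketch under my own naming; real deep learning systems stack many such layers and learn the weights from data rather than fixing them by hand):

import math

def sigmoid(x):
    # squash a weighted sum into the 0-1 range
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # each row of weights defines one unit; conceptually all units fire at once
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

hidden = layer([0.5, -1.0], weights=[[0.8, -0.2], [0.3, 0.9]], biases=[0.1, -0.1])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)  # a single value between 0 and 1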

What puts the ‘intelligence’ in Artificial Intelligence?

Today there are many different kinds of intelligent machines, with many different applications. In 1955, the study of intelligent machines was essentially rebranded as “artificial intelligence” in the proposal for a conference held at Dartmouth College during the summer of 1956.[xxi] In that proposal, the authors state that “a truly intelligent machine will carry out activities which may best be described as self-improvement”.[xxii] However, a single definition of artificial intelligence is difficult to adhere to, especially in a field rife with debate. For perspective, Legg and Hutter collect over seventy different definitions of the term.[xxiii] It has been variously described as the “art of creating machines that perform functions that require intelligence when performed by people”,[xxiv] as well as “the branch of computer science that is concerned with the automation of intelligent behaviour”.[xxv] One of the best definitions comes from the highly influential philosopher and computer scientist Margaret Boden: “Artificial intelligence (AI) seeks to make computers do the sorts of things that minds can do”.[xxvi] Within this definition, Boden (2016, p. 6) classifies five major types of AI, each with their own variations. The first is classical, or symbolic, “Good Old-Fashioned AI” (the GOFAI mentioned in a previous post), which can model learning, planning and reasoning based on logic; the second is artificial neural networks or connectionism, which can model aspects of the brain, recognise patterns in data and facilitate “deep learning”; the third type of AI is evolutionary programming, which models biological evolution and brain development; the last two types, cellular automata and dynamical systems, are used to model development in living organisms.

None of these types of AI can currently approximate anything close to human intelligence in terms of general cognitive capacities. A human level of AI is usually referred to as artificial general intelligence or AGI. AGIs should be capable of solving various complex problems across different domains, with autonomous control and their own thoughts, worries, feelings, strengths, weaknesses and predispositions (Goertzel and Pennachin, 2007). The only AI that exists right now is of a narrower type (often called artificial narrow intelligence or ANI), in that its intelligence is generally limited to the frame in which it is programmed. Some intelligent machines can currently evolve autonomously through deep learning, but these are still a weak form of AI relative to human cognition. In an influential essay from 1980, John Searle makes the distinction between “weak” and “strong” AI. This distinction is useful in understanding the current capacities of AI versus AGI. For weak AI, “the principal value of the computer in the study of the mind is that it gives us a very powerful tool”; while for strong AI “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states”.[xxvii] For strong AI, the programs are not merely tools that enable humans to develop explanations of cognition; the programs themselves are essentially the same as human cognition.

The Prospect of General Intelligence

While we currently do not have AGI, investment in ANI is only increasing and will have a significant impact on scientific and commercial development. These narrow intelligences are very powerful, able to perform a huge number of computations that would in some cases take humans multiple lifetimes. For example, computers can now beat world champions in popular games of creative reasoning such as chess (IBM’s Deep Blue in 1997), Jeopardy (IBM’s Watson in 2011) and Go (Google’s AlphaGo in 2016). The Organisation for Economic Co-operation and Development (OECD) found that AI start-ups’ share of worldwide private equity investment increased from just 3% in 2011 to roughly 12% in 2018.[xxviii] Germany is planning to invest €3 billion in AI research between now and 2025 to help implement its national AI strategy (“AI Made in Germany”), while the UK has a thriving AI start-up scene and £1 billion of government support.[xxix] In the USA, venture capitalists invested US$5 billion in AI in 2017 and US$8 billion in 2018.[xxx] The heavy investment in ANI start-ups and the extremely high valuations of some of the leading tech companies funding AGI research might lead to an artificial general intelligence in the coming years.

Achieving an artificial general intelligence could be a watershed moment for humanity, allowing complex problems to be solved at a scale once unimaginable. However, the rise of AGI comes with significant ethical issues, and there is a debate as to whether AGI would be a benevolent or malevolent force in relation to humanity. There are also those who fear such developments could lead to an artificial super intelligence (ASI), which would be “much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills”.[xxxi] With an increasingly connected world (referred to as the internet of things), artificial super intelligences could potentially “cause human extinction in the course of optimizing the Earth for their goals”.[xxxii] It is important, therefore, that humans remain in control of our technologies and use them for social good. As Stephen Hawking noted in 2016, “The rise of powerful AI will be either the best or the worst thing ever to happen to humanity. We do not yet know which”.

Endnotes

[i] Homer, 1924. The Iliad. William Heinemann, London. pp. 417–421

[ii] Aristotle, 1978. Aristotle’s De motu animalium. Princeton University Press, Princeton.

[iii] Glymour, G., 1992. Thinking Things Through. MIT Press, Cambridge, Mass.

[iv] Russell, S.J. and Norvig, P., 2010. Artificial Intelligence: A Modern Approach, 3rd ed. Pearson Education, Upper Saddle River, N.J.

[v] Carroll, L., 1958. Symbolic logic, and, The game of logic : (both books bound as one), Mathematical recreations of Lewis Carroll. Dover, New York.

[vi] Descartes, R., 1931 [1637]. The Philosophical Works of Descartes. Cambridge University Press, Cambridge.

[vii] Ibid., p. 116

[viii] Boden, M.A., 2016. AI: Its Nature and Future. OUP, Oxford. p. 4.

[ix] Lovelace, A.A., 1989. Notes by the Translator (1843), in: Hyman, R.A. (Ed.), Science and Reform: Selected Works of Charles Babbage. Cambridge University Press, Cambridge, pp. 267–311.

[x] Turing, A.M., 1936. “On Computable Numbers with an Application to the Entscheidungsproblem,” Proceedings of the London Mathematical Society, Series 2, 42/3 and 42/4., in: Davis, M. (Ed.), The Undecidable: Basic Papers on Undecidable Propositions, Unsolvable Problems, and Computable Functions. Raven Press, Hewlett, NY, pp. 116–53.

[xi] Nilsson, N.J., 1998. Artificial Intelligence: A New Synthesis. Morgan Kaufmann, San Francisco.

[xii] Lovelace, A.A., 1989. Notes by the Translator (1843), in: Hyman, R.A. (Ed.), Science and Reform: Selected Works of Charles Babbage. Cambridge University Press, Cambridge, pp. 303.

[xiii] See Boden, M.A., 2016. AI: Its Nature and Future. OUP, Oxford. See also Luger, G.F., 1998. Artificial Intelligence: Structures and Strategies for Complex Problem Solving. England.

[xiv] McCulloch, W.S., Pitts, W., 1943. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5, 115–133. https://doi.org/10.1007/BF02478259

[xv] See Newell, A., Simon, H., 1956. The logic theory machine – A complex information processing system. IRE Transactions on Information Theory 2, 61–79. https://doi.org/10.1109/TIT.1956.1056797. See also Newell, A., Simon, H.A., 1972. Human Problem Solving. Prentice-Hall, Englewood Cliffs, N.J.

[xvi] Wiener, N., 1961. Cybernetics: or, Control and Communication in the Animal and the Machine, 2nd ed. M.I.T. Press, New York.

[xvii] Medina, E., 2014. Cybernetic Revolutionaries: Technology and Politics in Allende’s Chile. The MIT Press, Cambridge.

[xviii] Picard, R.W., 1997. Affective computing. MIT Press, Cambridge, Mass.

[xix] Minsky, M., 2006. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. Simon & Schuster, Riverside.

[xx] Baars, B.J., Franklin, S., 2009. Consciousness is computational: The LIDA model of global workspace theory. International Journal of Machine Consciousness 1, 23–32. https://doi.org/10.1142/S1793843009000050

[xxi] McCarthy, J., Minsky, M.L., Rochester, N., Shannon, C.E., 2006. A proposal for the Dartmouth summer research project on artificial intelligence: August 31, 1955. AI Magazine 27, 12.

[xxii] Ibid., p 14

[xxiii] Legg, S., Hutter, M., 2007. Universal intelligence: A definition of machine intelligence. Minds and Machines: Journal for Artificial Intelligence, Philosophy and Cognitive Science 17, 391. https://doi.org/10.1007/s11023-007-9079-x

[xxiv] Kurzweil, R., 1990. The Age of Intelligent Machines. MIT Press, Cambridge, Mass.

[xxv] Luger, G.F., 1998. Artificial Intelligence: Structures and Strategies for Complex Problem Solving. England. p. 1.

[xxvi] Boden, M.A., 2016. AI: Its Nature and Future. OUP, Oxford. p. 1.

[xxvii] Searle, J.R., 1980. Minds, brains, and programs. Behavioral and Brain Sciences 3, p. 417. https://doi.org/10.1017/S0140525X00005756

[xxviii] OECD, 2018. Private Equity Investment in Artificial Intelligence (OECD Going Digital Policy Note). Paris.

[xxix] Deloitte, 2019. Future in the balance? How countries are pursuing an AI advantage (Insights from Deloitte’s State of AI in the Enterprise, No. 2nd Edition survey). Deloitte, London.

[xxx] Ibid.

[xxxi] Bostrom, N., 2006. How Long Before Superintelligence? Linguistic and Philosophical Investigations 5, p. 11.

[xxxii] Yudkowsky, E., Salamon, A., Shulman, C., Nelson, R., Kaas, S., Rayhawk, S., McCabe, T., 2010. Reducing Long-Term Catastrophic Risks from Artificial Intelligence. Machine Intelligence Research Institute. p. 1.

Autonomy and Automation: Work in the 21st Century (Zed series)

Last year I was contacted by Kim Walker at Zed Books to put together a book series in collaboration with Autonomy. I’m pleased to say that our first book, by Mark Bergfeld, is nearing completion, and we have four more on the way. We hope to build this momentum throughout 2020. With that in mind, the open call for proposals is below:

The labour market has been hollowed out and the future of work lies in the shadow of political crises. Many have argued that this growing social polarisation is driven by technological change. However, research has not kept up with the speed at which these changes are occurring. Social protections for workers are being eroded across the globe and technology is arguably catalysing this trend. Historically, the loss of employment in one industry has been more than offset by the expansion of employment in other industries. Yet, this employment tends to emerge under different social conditions. Research has often neglected how new technologies have catalysed exploitation, rather than helped workers overcome it.

This new series in collaboration with Zed aims to explore the rapidly changing nature of existing jobs as well as the variety of emergent occupations in new sectors. It takes technological change neither as an inherently liberatory force nor as an inherently constraining force, but rather as a function of social relations. Topics could include anything from sociological analysis of particular technical changes in industry such as the introduction of platforms and AI, to detailed ethnographies of particular experiences of workers themselves such as those of migrant carers, delivery drivers or freelance coders.

Proposals: We invite academics and non-academics to write punchy, trade-oriented books of 30–40,000 words on the above themes. Proposals should have the weight and rigour of academic thought, yet be accessible to a general audience.

If you are interested in submitting a proposal for the series, please contact M.Cole@leeds.ac.uk for further information.

Series editors:

Matt Cole – Post-Doctoral Fellow in Work and Employment, University of Leeds

Kendra Briken – Chancellor’s Fellow in Work and Employment, Strathclyde University

Will Stronge – Co-director of Autonomy

An update

I thought I should update the internet on my work.

I’m currently working on two journal articles – one related to labour process theory and another related to wage theft in the hospitality industry.

I’m also working on some blogs and research on AI for Autonomy (a think tank on work and its futures).

My current research looks at the tensions between human and machine intelligence in interactive service work, focusing on platforms specifically.

There is also a book series in the works, which has not been announced yet.

Conference Season is approaching, which means I’ll be presenting at SASE in New York, BUIRA in Newcastle, and IIPPE in Lille.

On Exploitation

I’m very happy to have my first book contribution published in the Bloomsbury Companion to Marx.

The entry introduces the Marxist conception of exploitation and summarises some key debates over the last century. I highly recommend ordering a copy for your university library or for personal reference. The text is below.

Exploitation

Matt Cole

Introduction

The concept of exploitation has a rich history in Marxist as well as non-Marxist political, economic and social theory. These multiple, and sometimes conflicting, definitions often rely on different assumptions concerning power, labor and economics generally. Etymologically, the modern term for exploitation1 emerged in the early-nineteenth century and referred to the ‘productive working’ of something. Generally, the word had a positive connotation among those who first used it; however, it developed negative connotations during the 1830s to 1850s due to the influence of French socialists like Saint-Simon and Charles Fourier. Marx was likely influenced by this negative conception of exploitation when he began his study of classical political economy (Adam Smith, David Ricardo, James Mill, etc.) while he lived in Paris from 1843 to 1845. Like Marx, I will focus primarily on the economic dimensions of exploitation because these form what he understood as the material foundation for social relations. The following will explain Marx’s conception of exploitation; contextualize its development within both historical debates and the development of Capital itself; and conclude with the political implications of the concept. The final point is the most important, as without a proper understanding of exploitation, there is no possibility of truly overcoming it.

To understand Marx’s concept of exploitation, it is first necessary to understand his conception of the development of the capitalist mode of production. A mode of production is made up of forces and relations of production, which codetermine one another. Forces of production are the range of possible means, determined by knowledge, science and technology. Relations of production are determined by the prevailing patterns of property or class relations. The classical Marxian periodization of modes of production and exploitation of labor follows the pattern: slave, feudal and then capitalist (see Banaji 2010). The slave mode of production relies on private property in people and non-labor means of production; the feudal mode of production relies on private property in land and non-labor means of production; and the capitalist mode of production relies on private property in non-labor means of production and land. In pre-capitalist modes of production, exploitation was directly mediated through the appropriation of the immediate surplus product, and labor was formally coerced. In the capitalist mode of production, by contrast, exploitation is abstracted through economic relations and labor is, formally, free, which is to say that workers can sell their labor as they see fit.

 

Over the course of the three volumes of Capital, Marx develops two aspects of exploitation: ‘primary exploitation’, which takes place in the production process itself, and ‘secondary exploitation’, which takes place outside of the production process and relies on the capitalist’s mastery and advantage, based on property ownership. The former can be productive of surplus value, which is translated into profits through the market and competition. The latter is essentially an antediluvian form of accumulation and operates through appropriation of the former’s surplus, or ‘profit upon alienation’. Understanding the relation between these two types of exploitation is essential in order to grasp Marx’s critique of political economy.

 

Primary Exploitation

Primary exploitation is the human and social process of ‘exploitation of the workman’ by the capitalist, which rests on a class monopoly of power over the means of industrial production and operates through the extraction of surplus value. Value is the representation of abstract homogenized labor, which emerges in the process of exchange and is measured by money. The rate of surplus value extracted in the labor process is ‘an exact expression for the degree of exploitation of labor-power by capital, or of the worker by the capitalist’ (Marx 1990: 326). However, this does not mean that the rate of surplus value is an expression for the ‘absolute magnitude’ of exploitation, because not all exploited labor produces surplus value.

For Marx, some labor is productive of value, while some is non-productive. In most Marxian economics, the distinction between productive and non-productive labor is central (see Foley 1986; Shaikh and Tonak 1994; Mohun 1996). Activities such as trade, financial services and advertising are not socially productive of value, yet firms who carry out these activities nonetheless exploit workers. The rate of primary exploitation for both productive and nonproductive workers is the ratio of surplus labor time (the excess of working time over necessary labor time) to necessary labor time (the average annual consumption per worker in the sector). It is important to note that, for Marx, capitalists cannot be exploited. The work of the capitalist appears as a labor process in its own right; however, it is the labor of exploitation rather than exploited labor. The wages of managers, just like the incomes of capitalists, are, as Marx notes in Capital Vol. 3, ‘precisely the quantity of others’ labor that is appropriated, and depends directly upon the rate of exploitation of this labor’ (Marx 1992: 511).

Secondary Exploitation

At the most abstract level, aggregate profit is essentially the monetary expression of aggregate surplus value; however, companies can also generate profit through purely redistributive techniques, taking advantage of the dynamics of circulation between social spheres. These profits come from what Marx terms secondary exploitation, or profit upon alienation. Secondary exploitation is mediated through financial and property relations that ensure the collection of interest payments, rents or profits through unequal exchange (merchant’s capital). This aspect of exploitation extracts and redistributes a portion of the total surplus value of society. The existence of secondary exploitation allows for two things: first, it explains how capitalism can profit from non-capitalist spheres without the creation of new value; and, second, it allows Marxian economics to account for the difference between the sum of profits and the sum of surplus values that emerges as values are transformed into prices (see Shaikh and Tonak 1994).

Marx defines secondary exploitation in volume three of Capital as an essentially archaic form of accumulation. This dynamic persists in those branches of industry that have not transitioned to the modern mode of production. In this mode of exploitation, money and means of production, such as tools, software, appliances, machinery, cars and business premises, are loaned in kind. These represent a specific sum of money, and the borrower must pay not only interest but also the price of the wear and tear that arises from the use-value of the items. Usury, trade and finance exploit a given mode of production without reproducing it and thus relate to the mode of production from the outside. Usurer’s capital, for example, ‘has capital’s mode of exploitation without its mode of production’ (Marx 1992: 732). The primary distinction that should be made in terms of the form of accumulation is whether these means of production are loaned to immediate producers, which presupposes a non-capitalist mode of production, or whether they are loaned to industrial capitalists, which presupposes a capitalist mode of production. Both are forms of secondary exploitation.

Debates

Defining the precise role of exploitation in production and capitalism has been the source of considerable debate in the history of economics and political economy. From the late-nineteenth to the late-twentieth century, the debates were largely between two distinct paradigms: the Marxist and the neo-classicalist (influenced by the Austrian and Lausanne Schools). Marxist thinkers asserted the centrality of value and exploitation in capitalism as a mode of production. They viewed the labor processes of capitalism, from the satanic mills to the penthouses of haute finance, as a totalizing system. Neo-classicalist thinkers, by contrast, typically denied the existence of exploitation, largely through omission of the labor process as a social phenomenon. They tended to flatten social phenomena to fit mathematical models and relied on the anomalous assumptions of Walrasian marginal equilibrium conditions, such that all market exchanges are perfectly competitive, yet reciprocal and voluntary. In sum, Marxists denied the possibility of capitalism persisting without exploitation, while neo-classicalists denied the possibility of a persistently exploitative system.

During the early 1980s, the stark divisions between Marxist and neo-classicalist approaches to economics and exploitation began to soften as Marxists were influenced by neo-Ricardian and Sraffian economics. Two overlapping tendencies of Marx-inspired thought emerged, called the neo-Ricardian Marxists and the analytical Marxists, respectively. Both tendencies shared a rejection of Marx’s labor theory of value as a foundation for the Marxian theory of exploitation. Neo-Ricardian Marxists were theoretically indebted to the Cambridge economists Maurice Dobb, Piero Sraffa and Joan Robinson, among others. Sraffa shared Ricardo’s so-called ‘corn theory of value’, or the idea that one can measure the rate of profit as a share of any particular commodity. Ian Steedman used the Sraffian approach to argue that the neo-Ricardian framework is a superior system and method to Marx’s when analyzing a range of issues involving prices and production under capitalism. Steedman and other neo-Ricardians claimed that, since magnitudes in terms of values tend to differ from those in terms of prices, Marx’s labor theory of value must be abandoned (Steedman 1977: 205–07). This position influenced the analytical Marxists (see Roemer 1982), but also elicited strong criticisms from those who retained a value-informed approach (see Himmelweit and Mohun 1981; Shaikh 1981). Their main criticism was that the neo-Ricardians conceptually flattened all labor process relations into money relations. The ideological roots of the series of concepts that neo-Ricardians relied on – equilibrium, profit as cost, and perfect competition – limited their analytical capacity to understand exploitation. Marxists claimed that Steedman and the neo-Ricardians could not accommodate the social dimensions of the labor process or the relative autonomy of value and price relations.

The analytical Marxists (or ‘rational choice’ Marxists) included scholars such as John Roemer, Jon Elster and G.A. Cohen. Roemer in particular argued that Marxian economics, particularly the notion of exploitation, should be derivable from Walrasian axioms of market equilibrium, perfect competition, full employment, etc., and should use neoclassical methodological assumptions such as the rational individual and normative preferences. This was intended to make Marx more palatable to the mainstream. The ‘simpler Marxian argument’ claimed that, because labor is the singular human element in production that generates the commodity, and because people who own means of production and do not labor in production control some of the revenues from the sale of the commodity, people who labor are exploited by those who do not (Cohen 1979). This led them to conclude that ‘the relationship between the labor theory of value and the concept of exploitation is one of mutual irrelevance’ (Cohen 1979: 338; see also Steedman 1981). Marxian political economists responded with wide-ranging criticisms of the analytical approach. For example, they argued that Roemer’s approach fails as a result of ignoring the distinction between labor and labor power (Lebowitz 1988); that the logic of a non-dialectical approach necessarily fails to grasp Marx’s theory of exploitation (Smith 1989); that Roemer’s analysis must be rejected because it cannot account for the emergence of class consciousness (Anderson and Thompson 1988); and that its reliance on Walrasian foundations is idealistic and ahistorical (Dymski and Elliot 1989).

Developments

The main criticisms of Marx by neoclassical economists were primarily influenced by Böhm-Bawerk’s Karl Marx and the Close of His System (1896). Böhm-Bawerk claims Marx’s fundamental error is that the labor theory of value set out in volume one of Capital contradicts the theory of the rate of profit and prices of production set out in volume three. Most subsequent critiques of Marx have made similar arguments. For example, Joan Robinson, in An Essay on Marxian Economics, argued that there was a contradiction between Marx’s assumptions in volume one, namely that rising labor productivity leads to a rising rate of exploitation, and those in volume three, i.e. that if the rate of exploitation remains stable, rising labor productivity could lead to a rising rate of real wages and a declining rate of profit. However, a careful analysis shows that these critiques stem from a failure to recognize that volumes one and three of Capital are at different levels of abstraction, make different assumptions and address different questions (see Hilferding 1949; Kay 1979; Mandel 1990).

Marx does not begin Capital volume one with a ‘labor theory of value’ prior to market relations but, rather, with an analysis of the commodity. Marx suspends all other differences in terms of production conditions, competition, interest, prices, etc. in order to examine concrete heterogeneous labor in the production of commodified ‘useful’ things. He does this to make the commodification of labor power (the capacity to labor) explicit. In volume one, only the ‘law of value’ matters. Marx restricts his analysis based on the assumption that the total profit available to capitalists is purely limited to the amount of surplus value appropriated from workers, and that the average rate of profit for the entire economy is simply the ratio of total surplus value to total value. This serves to make exploitation in the labor process transparent, and to designate it as the specifically capitalist type of exploitation. Unlike feudalism, where exploitation is transparent, personal and direct, specifically capitalist (primary) exploitation is indirect and socially mediated through commodity relations in the market.

Whether an individual exploits or is exploited depends on the nature and price of the commodified object and the actual activity performed by that person. In volume one, capitalists are deemed to exploit workers collectively through the impersonal domination of the market, yet Marx’s analysis is limited by the assumption that capitalists are only able to accumulate as much surplus as they are individually able to extract from their workers. Throughout volumes two and three, Marx progressively removes the assumptions of the first volume, so that industrial capitalists no longer must trade and distribute on their own behalf, rely on their own financial means, or use their own land. Trading can be undertaken by commercial capitalists and banking by money or finance capitalists, allowing a variety of types of assets to be incorporated into commodity relations. Capitalists’ capacity to accumulate is no longer restrained by their assets. This effectively transforms the means of production into the property of the capitalist class as a whole. Property-based class relations are thus generalized to the entire social system. Marx also introduces additional variables such as cost price, fixed capital and circulating capital that affect the rate of surplus value and the rate of profit. As a consequence of Marx’s increasingly complex analysis, the dynamics of price and profit end up obscuring the primary exploitation of surplus value. However, these different levels of abstraction are necessary because, as Marx notes, if he had simply started from the calculation of the rate of profit, he would never have been able to ‘establish any specific relationship between the excess and the part of capital laid out on wages’ (1992: 138).

Conclusion

The political implications of Marx’s concept of exploitation cannot be overstated. Exploitation is the basis for the production of surplus value, which forms a link between labor process relations in production and money relations in the market. Labor process relations, which correspond to concrete labor and the organization of production, manifest themselves in the struggle over intensity, time and interpersonal dynamics. The politics of the labor process are what Elson calls a ‘politics of production’, which concentrates on ‘trying to improve conditions of production; shorten the working day, organize worker resistance on the shop-floor; build up workers’ co-operatives, produce an alternative plan …’ such as co-operatives and socialist organization (1979, p. 172). Money relations, which can be defined as those relations directly mediated by the universal equivalent (money), correspond to abstract labor and become manifest in the struggle over the payment or non-payment of wages. The politics of money relations are what Elson calls a ‘politics of circulation’, which concentrates on changing distribution in ways that are advantageous to workers: for example, raising money wages, controlling money prices, regulating the financial system, establishing a welfare state and so on. Marx’s theory of exploitation offers a framework that unifies labor process and money relations and, in doing so, contains within it a politics that aims to move beyond capitalism. As Ernest Mandel pointed out: ‘The growth of the proletariat, of its exploitation and of organized revolt against that exploitation, are the main levers for the overthrow of capitalism’ (1990: 83).

Notes:

1 The root word ‘exploit’ comes from the late-fourteenth century French espleiten or esploiten, ‘to accomplish, achieve, fulfill’, from Old French esploitier, espleiter, ‘to carry out, perform, accomplish’ (Harper 2017).

 

 

References:

Anderson, W.H.L. and Thompson, F.W. (1988), ‘Neoclassical Marxism.’ Science & Society 52: 191–214.

Banaji, J. (2010), Theory as History. London: Brill.

Böhm-Bawerk, E. von. (1984), Karl Marx and the Close of His System, Philadelphia: Orion Editions.

Cohen, G.A. (1979), ‘The Labor Theory of Value and the Concept of Exploitation’, Philosophy & Public Affairs, 8: 338–60.

Dymski, G.A. and Elliot, J.E. (1989), ‘Roemer vs. Marx: Should “Anyone” Be Interested in Exploitation?’, Canadian Journal of Philosophy, Supplementary Volume 15: 333.

Elson, D. (ed.) (1979), Value: The Representation of Labour in Capitalism, London: CSE Books.

Foley, D.K. (1986), Understanding Capital: Marx’s Economic Theory, Cambridge: Harvard University Press.

Harper, D. (2017), ‘Exploit’ Online Etymology Dictionary. http://www.etymonline.com/index.php?term=exploit&allowed_in_frame=0.

Hilferding, R. (1949), ‘Böhm-Bawerk’s Criticism of Marx’, in Sweezy, P. (ed.), Karl Marx and the Close of His System and Böhm-Bawerk’s Criticism of Marx, 121–96, New York: Augustus M. Kelley.

Himmelweit, S. and Mohun, S. (1981), ‘Real Abstractions and Anomalous Assumptions’, in Steedman, I. (ed.), The Value Controversy, 224–65, London: New Left Books.

Kay, G. (1979), ‘Why Labour is the Starting Point of Capital’, in Elson, D. (ed.), Value: The Representation of Labour in Capitalism, 46–66, London: CSE Books.

Lebowitz, M. (1988), ‘Is “Analytical Marxism” Marxism?’ Science and Society 52: 215–28.

Mandel, E. (1990), ‘Karl Marx’, in: Capital, Volume 1: A Critique of Political Economy, 11–86, Harmondsworth: Penguin in association with New Left Books.

Marx, K. (1990), Capital, Volume I: A Critique of Political Economy, Harmondsworth: Penguin in association with New Left Review.

Marx, K. (1992), Capital, Volume III: A Critique of Political Economy, Harmondsworth: Penguin in association with New Left Review.

Mohun, S. (1996) ‘Productive and Unproductive Labor in the Labor Theory of Value’ Review of Radical Political Economics 28: 30–54.

Robinson, J. (1942), An Essay on Marxian Economics, London: Macmillan.

Roemer, J.E. (1982), A General Theory of Exploitation and Class, Cambridge MA: Harvard University Press.

Shaikh, A. (1981), ‘The Poverty of Algebra’, in Steedman, I. (ed.), The Value Controversy, 266–300, London: New Left Books.

Shaikh, A. and Tonak, E.A. (1994), Measuring the Wealth of Nations: The Political Economy of National Accounts, Cambridge: Cambridge University Press.

Smith, T. (1989), ‘Roemer on Marx’s Theory of Exploitation: Shortcomings of a Non-Dialectical Approach’, Science & Society, 53: 327–340.

Steedman, I. (1981), The Value Controversy, London: New Left Books.

Steedman, I. (1977), Marx After Sraffa, London: New Left Books.

Joint Call for Papers from the Political Economy of Work and Social Reproduction Working Groups

IIPPE 9th Annual Conference in Political Economy

Pula, Croatia


The Political Economy of Work and Social Reproduction Working Groups invite you to submit proposals for individual papers, themed panels or streams of panels related to our lines of inquiry. These may include theoretical and empirical contributions that focus primarily on the relationship between work and social reproduction. Previous IIPPE conferences have highlighted the clear overlaps and synergies in many contributions on the Political Economy of Work and on Social Reproduction. Our aim at the Pula conference is therefore to deepen and strengthen such synergies. In this spirit, we welcome contributions on the following themes:

  • The relationship between the productive and reproductive sphere
  • Feminist political economy of work
  • Gender, labour movements and capitalism
  • Conceptualising and measuring value in productive and reproductive work
  • Domestic labour, migration and global capitalism
  • The care crisis under capitalism
  • The political economy of time and time-use in relation to work and social reproduction
  • Empirically grounded discussions of concrete, abstract, private, social and caring labour
  • Emotional labour, value, and the political economy of production and social reproduction
  • Gender and inequality in the workplace: the gender pay gap, occupational segregation and other dimensions of gender and inequality in work
  • Debating the feminization of labour

We encourage the submission of panel proposals (consisting of up to four presentations) as an opportunity to showcase the work of study groups in greater depth than is possible in single presentations.

Papers and panel proposals can be submitted on iippe.org by 15 March 2018, ticking the Social Reproduction and/or Political Economy of Work Working Groups as part of your submission.

Hannah Bargawi

Matthew Cole

Sara Stevano

Experiential Commodities, Experiential Labour: Exploring the Service Labour Process in Hospitality Work


Experiential Commodities Experiential Labour PP

My research addresses the experiences of workers in the hospitality industry. It takes a political economic approach to studying the labour process, which entails an analysis grounded in the unity of production and valorisation. Ethnographies of work such as Diamond’s (1992) study of the industrial production of care and Sherman’s (2007) study of the industrial production of luxury attempt to incorporate a theory of service production into labour process analysis; however, they lack any conception of the capitalist labour process as one that relies on the production of surplus value. The absence of a social conception of the commodity fundamentally hinders the capacity to analyse work as the unification of money and labour relations in the process of exploitation. Without a theory grounded in a Marxist conception of value, we cannot understand the relations between the structural imperatives of accumulation and the immediate process of production. This distinctly social relation overdetermines workers’ experiences of work. Understanding the production of experience as a commodity links the concrete labour of its production with the abstract labour that produces value in the capitalist mode of production. The research provides an empirical and theoretical approach to understanding the politics of hospitality work as specifically related to the nature of the service commodity and the tensions that arise within the valorisation process.

Economic contribution and sector composition

What is hospitality? The British Hospitality Association (BHA) defines it as “the provision of accommodation, meals and drinks in venues outside of the home” to both UK residents and overseas visitors. In the UK, the hospitality industry is the fourth largest industry by employment and has been the largest growth industry over the past decade. It employs around 3 million people in the UK (over a quarter of a million more than manufacturing). Since 2011, it has grown by 13%, more than double the employment growth of the economy overall. The hospitality industry also makes a significant economic contribution to the UK, adding an estimated £57 billion to the economy in 2014, roughly 4% of GDP. Yet in the context of this dramatic growth, working conditions remain poor. Average gross earnings for full-time workers in the hotel industry are the lowest in the UK, and the industry has the highest incidence of low-paid workers at 59 per cent. Added to this is its dubious status as one of the least unionised industries, with density standing at 3.5 per cent today.[1] These elements make the London hospitality industry a particularly pertinent case study of the contemporary experiences of workers in the UK. Hotels are a subsector of the hospitality industry, but they are uniquely representative of the labour processes involved in hospitality work generally.

Method

I collected data through ethnographic participant observation over three months as an agency worker in London hotels. I conducted 35 semi-structured interviews with workers, supervisors and managers, which I have transcribed and coded. These interviews were spread across 25 different companies including 5 agencies and 20 hotels, 3 of which also used the same outsourced hotel management firm.

The Value-Form

In Capital vol. 1, Marx argues that surplus value is produced through the capacity of the capitalist to extract more labour-time, and hence value, than is required to purchase labour power from workers. This labour-time is not that of private individuals, but rather socially necessary labour-time, which refers to the “labour-time required to produce any use-value under the conditions of production normal for a given society and with the average degree of skill and intensity of labour prevalent in that society” (Marx 1976, p. 129). In a given labour process, the difference between socially necessary labour time and the concrete labour time of the working day is surplus labour time. The ratio of surplus labour time to socially necessary labour time determines the rate of surplus value. Marx (1976, p. 326) defines the rate of surplus value as the “exact expression for the degree of exploitation of labour-power by capital, or of the worker by the capitalist”.
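Stated formally (the standard notation for Marx’s definition, supplied here for clarity rather than drawn from the thesis itself), with $s$ as surplus value (surplus labour time) and $v$ as variable capital (necessary labour time), the rate of surplus value is

$$ s' = \frac{s}{v} = \frac{\text{surplus labour time}}{\text{necessary labour time}}. $$

For example, if a working day of eight hours comprises five hours of necessary labour and three hours of surplus labour, then $s' = 3/5$, a rate of surplus value of 60 per cent.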

The commodity is a social use value represented by the materialisation of labour in the form of its exchange value, i.e. money. As Marx first argued, it is an imaginary and “purely social mode of existence”, which has “nothing to do with its corporal reality” (Marx 1969, p. 171). Commodities thus have no necessary physicality. The concrete labour that produces the commodity “leaves no trace in it”. It is thus conceived as simply a “definite quantity of social labour or of money” (ibid.). Marx’s analysis may appear to focus on tangible objects, but the theoretical framework clearly does not put physical limitations on the definition of a commodity. Marxist economists such as Fiona Tregenna have also argued that the physical properties of something neither qualify it as a commodity nor exclude it from the realm of commodities (Tregenna 2011, p. 287). It is the form of the organisation of the labour process that is decisive, not its content. Relations in the exchange of labour and capital alone determine the form. Capital’s indifference to the physicality of the product means that commodities, along with the value contained within them, are essentially ‘amaterial’. In other words, just as ‘production’ is not limited to the production of tangible objects, commodities are not limited to physical goods. The amateriality of the commodity-form thus allows both goods and services to be considered commodities.

The Service Commodity

The primary criterion that distinguishes a service commodity from a physical commodity is the relative simultaneity of production and consumption, independent of the material form or product of the labour. This means that the labour represented in the value of a service commodity remains living labour, while in the manufacture of physical commodities it is dead labour that can be re-circulated on secondary markets. Concrete labour may not be observable in the same way for service commodities as for physical manufactured commodities. For a service commodity, the individual use-value might vanish with the cessation of the labour-power itself. The product of the labour may be an immaterial transformation, as in the case of knowledge transfer, customer interaction or the production of affect. Production may involve a perceptible material transformation (clean rooms) or an immaterial transformation (knowledge, emotion).

The Hospitality Experience

The focus on ‘experience’ is derived directly from company literature and workers’ own responses. A repeated idiom, articulated by workers and managers alike, was that “it’s all about the experience” (Danilo, hospitality assistant, Hilltop). Workers saw themselves as responsible for providing a “certain standard of contemporary life and service” (Ammon, operations manager, Amnesty Hotel). They viewed customers as paying not only for a place to stay, but also for satisfying interactions and spaces. I take this prosaic idea and develop it in more depth, drawing directly from the experiences of workers through their own voices, as well as Hochschild’s (1983, p. 6) insight that, “In processing people, the product is a state of mind”. However, her Weberian approach focuses on the “immediacy of the individualised commodification process”, which fails to locate her analysis in the wider context of social relations under capitalism. Emotional labour is not a “means of production” but rather a particular feature of concrete labour; it is only labour insofar as it assumes a social form, mediated by exchange. An approach that engages more rigorously with Marxist theories of labour and value makes this more readily apparent and can use it to present a more complete theory of the service labour process.

The Experiential Commodity

The experiential commodity is a dynamic and flexible product assembled through the reproduction of hospitable environments, the cultivation of affects, and the maintenance of supplementary services. The primary component is the provision of a room and associated services delivered with the affect of care. It is primary because it requires the largest investment of fixed and variable capital and is the main source of profit for hotel companies, as room revenues generally contribute between 60 and 80 per cent of total revenues (Kinnard et al. 2001). Research has found that the “service encounter” between guests and workers is crucial for guest satisfaction and the profit chain (Urry, 1990; Schneider and Bowen, 1999). Accommodation is the tangible aspect of the primary component of the experiential commodity. Housekeepers and room attendants had essential roles in the production of the hospitality experience. Guests expected clean rooms, fresh bedding, unworn furniture and working facilities. The intangible aspects of the primary component of the experiential commodity are the reception services, which include catering to guests’ tastes, offering advice on the local area, and generally providing attention on demand.

The secondary component includes the provision of food, beverages, and supplementary services (the quantity increases with the number of stars), which contribute 10 to 20 per cent of total revenues (ibid.). Hotel kitchens produced the tangible aspects of this secondary component. The organisation of the kitchens and the production of food and beverages corresponded to both the size and star-rating of the hotel. Waiters, servers, and hospitality assistants (an official title for certain lower-level positions) in food and beverage-related departments produce the intangible elements of the secondary component of the guest experience. They take orders and deliver the consumable objects prepared in kitchens; however, their primary role is the production of positive guest interactions, which guests expect due to the hotel’s commodification of affect.

[Figure: the structure of the experiential commodity. The primary component comprises a tangible aspect produced through a semi-visible process (rooms) and an intangible aspect produced through a visible process (reception); the secondary component likewise comprises tangible aspects (food, beverage, supplements) and an intangible aspect (service interactions).]

Service Assembly Lines

There is a progressive temporal order to the assembly of these multiple elements that corresponds to specific labour processes and departments. This temporal order, or ‘service assembly line’, operates on a circuit that begins with a guest’s request, received by a member of staff in the ‘front-of-house’ [FOH], who relays this request to the ‘back-of-house’ [BOH], where the tangible and material aspects such as food ingredients or bedding are transformed into components of the commodity. Once this labour is performed, the BOH relays this information to the FOH, who are expected to deliver the product to the guest with their own contribution of positive interaction in order to produce satisfaction. The FOH staff take orders and deliver the experience, while the BOH staff produce or reproduce the tangible consumables. Front and back of house thus have a symbiotic relationship mediated by both the customer and management.

The hotel’s service assembly line can be illustrated by following the path of its guests. As they enter the hotel, the doorman would greet them and the concierge would open their profile in the hotel registration system. Hotels typically keep detailed records of a customer’s purchase history and preferences. Based on this data, the receptionists would have already directed the food and beverage department to send the guests’ favourite bottle of wine up to their room. The luggage porter would bring their bags to their room in anticipation of a tip. Once settled, the guests might call reception to book them a table at the restaurant. Meanwhile, beyond the public spaces of the hotel, housekeepers, room attendants, maintenance, and porters would work to reproduce the physical spaces and articles of guests’ consumption. The laundry services would reproduce clean linen and employee uniforms while the porters cleaned the public spaces. Room attendants would scrub toilets, top up room amenities, and replace bedding. There were daily changes to room allocation and the order of cleaning, demonstrating how different concrete labours must work together in relation to customer demand.

In the hotel restaurant, guests would be greeted and seated by the host. They would then place an order with their waiter who, using their capacity for emotional labour, would transform the atmosphere to appeal to the customer’s tastes in anticipation of a tip. The waiter would relay their order and special requests to the chefs who had spent most of the day preparing food for the dinner service. After the kitchen finished preparing the food and the bartender finished making their drinks, the server would deliver them to the table with a smile. Respondents’ experiences explained the difficulty of balancing the emotional labour of providing seemingly authentic positive customer interactions with the managerial imperative to turn tables and increase profits. The assembly of secondary components was in constant flux relative to the inputs of the kitchen (in terms of both ingredients and labour), the servers (in terms of capacity to manage tables and orders) and customer demand (in terms of attention).

Theoretical Implications

Braverman (1998: p. 361) was the first to note that the reproduction of clean rooms in hotels was “an assembly operation which is not different from many factory assembly operations”. However, in hotels the assembly is not linear like manufacturing, but rather a circuit in which customer feedback continually drives production as long as the service is offered. The relative simultaneity of production and consumption of services means that the customer adds a secondary element of variability to the variable capital of the labour process.

The concept of the service assembly line is important in the analysis of hotel work because it allows us to understand the production of experience as a service commodity. Approaches to service work have largely failed to recognise the production of services in terms of commodification and the progressive assembly of seemingly independent labour processes (Braverman 1998; Taylor and Bain, 2005; Korczynski, 2005; Sherman, 2011). Belanger and Edwards (2013) defined front-line service as work in which the “contribution of the front-line employee to the labour process and the creation of use value appear at the same time” (p. 441). This conceptualisation is accurate in certain cases, such as those situations in which the commodity produced relies solely on the activity of the front-line worker. However, as the case of hotels demonstrates, customers do not pay for a bed to be made separately from the service they receive from the concierge or the luggage porter. These seemingly independent elements are part of the same production process. The service assembly line is thus a dynamic circuit that operates according to a structured temporal sequence, yet the specific results of that sequence can be continually altered or revised according to customer feedback. Without descriptions of the detailed division of labour and the concrete processes at work, abstracting from those processes to examine the broader social relations of production remains ungrounded.[2]

 

[1] https://www.statista.com/statistics/287614/trade-union-density-hospitality-united-kingdom-uk/
[2] The necessity of capital’s expanded reproduction drives capital to produce new commodities, the value of which can only be realised through their consumption. Capital thus continually transforms humanity’s relation to nature and need, which is socially mediated; “Production not only supplies a material for the need, but it also supplies a need for the material” (Marx 1993 [1973] p. 92). The production of a new social use-value also produces the manner of social consumption. For example, a factory produces trains for railway companies, who then use them to sell transport services to other firms and individual consumers who need to travel. A hotel produces rooms and positive affects, which people need when they travel. Production thus creates both the commodity as object and the consumer as subject. As the produced object becomes integrated into normal use, earlier commodities become technologically obsolete along with the knowledge of their use. Production therefore produces “the object of consumption, the manner of consumption and the motive of consumption” (1993 [1973] p. 92). If capitalist production not only produces the object of consumption, but also the manner and motive of consumption, then hotels do not simply satisfy the expectations of customers; they produce their very needs and desires. The star-rating system allowed hotel companies to differentiate between the object, manner, and motive of consumption by stratifying the market according to spending capacities and tastes. The hotels’ process of commodification and production of experience thus also produces the manner of its consumption.

 

Platform Capitalism and the Value Form

Image credit: Chris Koch

Reposted from Salvage Quarterly

According to the speculations of techno-futurologists, left and right, the machines are here to liberate us. Most of the discourse is dominated by the neoliberal right, such as Erik Brynjolfsson and Andrew McAfee, and Andrew Haldane, chief economist of the Bank of England. Their arguments, avoiding questions of exploitation, are naturally popular with the establishment. Brynjolfsson and McAfee’s best-selling book The Second Machine Age has been lauded by leaders at the World Economic Forum.

On the left, however, Paul Mason welcomes our new robotic overlords, in an intellectual synthesis that spans Marx’s 1858 ‘Fragment on Machines’ (treated by Mason as a prophecy), Bogdanov’s 1909 novel Red Star and Martin Ford’s 2015 Rise of the Robots, not to mention Andre Gorz. Nick Srnicek and Alex Williams offer a more qualified welcome to the possibility of full automation and a workless future. But even the best of these analyses, and even the most alluring visions of networked insurrection and high-tech communist utopia, have to face up to how these technologies have been used, historically, to deepen exploitation rather than overcome it. It is far more likely, in short, that new technologies will intensify drudgery and further limit human freedom. And it is on this basis that we have to evaluate the impacts of platform technologies on the capitalist mode of production.


 

In Platform Capitalism, Nick Srnicek provides one of the first systematic Marxist interventions into the discourse around data-driven digitalisation, automation and the future of work. ‘Platforms’ are ‘digital infrastructures that enable two or more groups to interact’ within the constraints of the capitalist system. ‘Platform capitalism’ does not simply refer to the rise of alternative work arrangements such as temporary, independent, or other forms of precarious labour contracts, but rather to an organisational shift in the system as a whole driven by financialisation, increasing inequality, and the tech boom. According to Srnicek, the evolution of internet-era behemoths like Google, Facebook, Amazon, and Uber, as well as radically modernised pre-internet companies like GE, Siemens, and Rolls Royce, has fundamentally altered the landscape of capital accumulation and property relations between firms. It is important to remember that the US military and other state-funded bodies produced much of the original technological innovation in computing and logistics. The emergence of platform capitalism is essentially the commercialisation and industrial maturation of data-based social relations, theorised in the 1980s as ‘information capitalism’. Does the emergence of platform capitalism constitute a new mode of exploitation? In order to address this question, we must situate the empirical fact of platforms’ existence within a historical and theoretical context.

The evolution of these firms is inextricably bound to the history of asset-price Keynesianism. From approximately the mid-1990s, bubbles in asset prices temporarily drove investment and created jobs and growth where there would otherwise have been none. This was inaugurated with the dot-com bubble. During the economic boom of the 1990s, huge financial investments were poured into telecommunications infrastructure. Millions of miles of new cable and major advances in software and network design allowed for the commercialisation of the previously non-commercial Internet. After the dot-com bubble burst in 2001, the combination of financial deregulation and an ever-increasing demand for financial assets led to another crisis in 2007-8, triggered by complex mortgage-backed securities. The crisis response of central banks, including quantitative easing and the lowering of interest rates, weakened returns on the more traditionally secure financial assets. This encouraged investors to look toward other asset containers – mostly property and the tech sector, or what would soon be known as the emerging platform economy. Largely as a result of this staggering amount of new investment, the technology and connectivity required to transform everyday human activities into digitally recorded data became relatively less expensive and widely available. Srnicek claims that this marked the twenty-first-century shift toward the period of ‘platform capitalism’, in which data collection and monetisation is standard business practice.

Platforms are defined by four attributes: they provide an infrastructure for mediating exchanges between different groups; they follow monopoly tendencies driven by network effects; they strategically cross-subsidise different parts of the business in order to diversify user groups; and they maintain a proprietary architecture that mediates interaction possibilities. These attributes are too broad to tell us anything about the mode of exploitation involved; however, Srnicek’s typology of platforms is based on their methods of revenue generation: advertising, cloud-based services, industrial production, product rental, and lean or ‘gig’ platforms. Advertising platforms (Google, Facebook) extract information on user behaviour, analyse that data, and sell it to advertisers. Cloud platforms (Salesforce) own hardware and software that are rented out to digital-dependent (read: nearly all) businesses. Industrial platforms (GE) are modernised hybrids of traditional manufacturing and contemporary logistics that use proprietary hardware and software to provide services and lower production costs. Product platforms (Rolls Royce) also transform traditional goods into rented services by collecting fees for the use of their products. Finally, lean platforms (Uber, Airbnb, Deliveroo) outsource all asset ownership other than software and data analytics, then profit as digitally savvy middlemen disrupting established markets (the impact of which will be discussed in more detail below). Platforms often combine more than one revenue model to make a profit; however, the most important asset for platforms is their intellectual property – company software, algorithms, and user data.

The reliance on diverse revenue models raises questions firstly about the structural position of platforms in the overall circuit of capital accumulation, and secondly about whether in the future we will continue to regard the worker as central to production. To address this, it is important to understand the rise of the platform in relation to different forms of exploitation or means of profit-making. The late twentieth century produced a procession of post-capitalist prophets who sought support in Marx’s writings, going back to the Grundrisse, to justify the idea that the workers of the world might eventually ‘step to the side of the production process instead of being its chief actor’. As Tessa Morris-Suzuki argued in the 1980s, the exploitation of surplus labour as the primary source of profit was not, according to Marx, ever intended as an eternal economic law. It was the defining characteristic of industrial capitalism, a particular historical system that evolved out of merchant capitalism, which in turn evolved from feudalism. Marx established that at the most abstract level, aggregate profit is essentially the monetary expression of aggregate surplus value; however, within the circuit of capital, firms can also accumulate profits through unequal exchange and redistributive phenomena between social spheres. Profiting through the former mode is called ‘primary exploitation’ while profiting through the latter is called ‘secondary exploitation’ (this distinction will be further elaborated below). Does the rise of the platform simply indicate a shift from primary to secondary exploitation, or does it represent a new mode of exploitation entirely? Has labour ceased to be the main source of surplus value?
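Schematically (a standard Marxian formalisation, supplied for clarity rather than drawn from Platform Capitalism itself): if $s_i$ is the surplus value produced by firm $i$ and $\pi_i$ the profit it appropriates, then at the aggregate level, and abstracting from the conversion between labour time and money,

$$ \sum_i \pi_i = \sum_i s_i, \qquad \text{while in general } \pi_i \neq s_i \text{ for any individual firm}. $$

Primary exploitation adds to the right-hand side; secondary exploitation merely reshuffles shares of it among firms and spheres.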


 

In Platform Capitalism, Srnicek offers an innovative framework through which to address this question in his conception of data as ‘raw material’. Data is defined as ‘information that something happened’. It is distinguished from knowledge, or ‘information about why something happened’. The act of recording data is either labour carried out by a human or a function of a human-programmed computer algorithm – or, often, both. The production of data thus relies on labour power and a material infrastructure. Data is the ‘raw material that must be extracted’ from the ‘activities of users’, which are the natural source of this raw material. For Marx, raw materials are those parts of nature that have been filtered through previous labour (for example, ore that has been extracted from the earth). Nature is any environment that can exist independently of humanity and serves as the ‘universal subject of human labour’. For example, water is found in nature; yet when it is separated from a river, filtered, and stored in tanks, it serves as a raw material. Srnicek extends this Marxian distinction beyond flora and fauna into the realm of human activity itself. Nature becomes any potential activity humans perform in their daily lives: economic transactions, consumer tastes, user movement, location, and so on. The mining and processing of these activities transforms them into a raw material – data – which can be used in the production of service commodities.

The production of a service as a commodity is just like the production of a good as a commodity, except that in the production of service commodities, use-value might vanish with the cessation of the labour-power itself due to the relative simultaneity of production and consumption. We should remember that for Marx, the commodity, as a materialisation of labour in the form of its exchange value, is an imaginary and ‘purely social mode of existence’, which has ‘nothing to do with its corporal reality’. All that matters is that the labour process is subsumed into the capitalist form of primary exploitation. Primary exploitation takes place in the labour process itself and can be productive of surplus value, which is translated into profits through markets and competition. It is the human and social process of ‘exploitation of the workman’ by the capitalist, which relies on a classed monopoly of power over the means of industrial production. The rate of surplus value extracted in the labour process is ‘an exact expression for the degree of exploitation of labour-power by capital, or of the worker by the capitalist’. It is important to note that the rate of surplus value is not an expression of the ‘absolute magnitude’ of the exploitation, because not all exploited labour produces surplus value. Platforms can profit from the exploitation of surplus labour yet not produce any new value, because that labour is socially unproductive. Yet they can also profit through the production of surplus value.

Harry Braverman, writing in 1974, pointed out that when a worker does not offer labour ‘directly to the user of its effects’, but rather sells it to a capitalist, who re-sells it on the market, this is ‘the capitalist form of production in the field of services’. Many services, such as education and health care, can also be productive of surplus value if they take a capitalist form. Marx himself referred to the transport industry as a service that was productive of surplus value. As service producers, platforms like Deliveroo are value-productive, since the commodity produced is the change of place itself. Amazon provides a similarly productive service in its logistics centres. Google and Facebook sell advertising space and consumer behaviour data. These platforms are arguably a further development of a longer trend away from producing goods as commodities and toward producing services as commodities. Over the past several decades, nearly all developed economies have seen a gradual decline in manufacturing production and a rise in services as a share of employment and GDP. The UK in particular has seen a sharp rise, with ‘services’ now comprising 79% of value-added GDP.

Cloud and product platforms do not produce services as commodities; rather, they accumulate profits in the form of rents or other means. They are not productive of value and signal a potential shift from primary to secondary exploitation in profit-making. Secondary exploitation, or ‘profit upon alienation’, takes place primarily through financial and property relations that facilitate the collection of interest payments, rents or profits through unequal exchange (merchant’s capital). This form of exploitation appropriates surplus labour performed elsewhere and, in doing so, merely redistributes a portion of the total surplus value of society. The existence of secondary exploitation allows for two things: first, it explains how commercial or financial capitalists can profit from non-capitalist spheres without the creation of new value; and second, it allows Marxian economics to account for the difference between the sum of profits and the sum of surplus values that emerges as values are transformed into prices. This is because profit is not the same as surplus value, though the rate of each tends to equalise over time.

The developmental arc of a successful platform generally begins with the technological disruption of an existing industry and ends with the platform achieving industry-gatekeeper or quasi-monopoly status. As platforms expand, they capture an increasingly large amount of data. Their quests for gatekeeper status lead them to diversify and encroach on one another. The rapid expansion of platforms has resulted in new monopolies, which now provide the basic digital and logistical infrastructures upon which much of the economy operates. The increasingly privatised ownership and management of public services and business infrastructure is indicative of the aforementioned shift from primary to secondary exploitation.

The enclosure of electronic ecosystems is a particularly interesting instance of secondary exploitation. Facebook has pursued a strategy of ‘funnelling of data extraction into siloed platforms’ in Africa and other less-developed areas of the world. Its ‘Free Basics’ program has brought Internet access to 25 million people across more than 37 countries; however, any service other than Facebook that wants access to these users is required to partner with the company and operate through its network and software platform. This combines monopoly (a sole producer) and monopsony (a sole buyer) power to reproduce the exploitative dynamics of accumulation by dispossession. Through these programs, the extractive apparatuses of imperialism have found their contemporary counterpart in the global enclosure of digital infrastructure and the mining of data. Facebook’s ideology of connectivity as a good in itself simply serves the company’s interest and reproduces the exploitation of the economic periphery.

Each platform’s relation to a given mode of exploitation ultimately depends on its concrete form. Advertising platforms like Google and Facebook, as well as lean platforms like Uber and Deliveroo, use their intellectual property to mine data as raw material, which becomes one of the elements of constant capital in the selling of service commodities. Advertising platforms sell advertisers access to billions of users, targeted through analysis of communication and consumer behaviour patterns, while lean platforms sell their particularly efficient means of transportation and their user base, profiting through a combination of fees. They also tend to rationalise informal economies of petty commodity producers and consumers into a formal economy mediated through proprietary means. Industrial platforms have a more traditional means of profit-making. Cloud and product platforms like Salesforce or Rolls Royce primarily extract rents from the use of proprietary technologies and infrastructure. Each platform uses a combination of primary and secondary means of exploitation to make a profit.


 

It is difficult to abstract from the concrete relations of the labour process. However, the classification of concrete human labour and its place in the industrial circuit of capital allows us to understand the relation between the production of value on the one hand, and the mere accumulation of profit on the other. It is for this reason that the analysis of platforms must avoid fatalistic metaphysical claims like Negri’s insistence that ‘there is no outside to capitalism’. Accounts of socio-economic phenomena that flatten distinctions between labour and non-labour under capitalism (for example, Beller’s claim that ‘looking is labour’, that merely glancing at an advertisement is productive of surplus-value), or between industrial capitalist and merchant or financial capitalist relations, serve to obscure rather than clarify the underlying processes. Contra autonomist-inspired approaches, which have tended to characterise all activities as potentially free labour, our approach retains labour’s specific meaning in relation to capital, which is neither omniscient nor omnipresent. As Srnicek reminds us, it is precisely because ‘most of our social interactions do not enter into a valorisation process’ that companies are competing to build platforms and capture monetisable data. This is a crucial point. On the one hand, the individual activities of users cannot be classified as free labour, since they are ‘naturally occurring’ and become ‘raw material’ only through recording and processing. If they were a source of surplus labour, Srnicek points out, capitalism would have discovered an abundant new frontier of value, resulting in a global boom that shows no sign of appearing. On the other hand, the activities of those who produce the means of extraction and process those raw materials – those who design the user interfaces, write the algorithms, package and sell the analytics – can be classified as labour.

Of the five types of platform, the lean platform has had the most visible and immediate effects on the labour market and workers. Lean platforms have expanded rapidly in sectors that require intensive non-routine manual labour, which is notoriously difficult to automate (this is also one reason why service-sector productivity continues to lag behind that of manufacturing). Lean platforms essentially extend the low-tech model of temp agencies or informal networks of day-labourers into really subsumed and digitally mediated service sectors. They offer a private technological ‘fix’ to labour market precarity, taking advantage of job polarisation and the displacement of individuals into the relative surplus population. Companies aim to classify workers as independent contractors and pay piece-rates whenever possible, because these ensure a specific rate of exploitation for service commodities that hourly wages cannot. Lean platforms’ reliance on low margins and the pursuit of market expansion over short-term profitability means that workers bear the brunt of any problems. Uber and Deliveroo have had disputes with their workers, who have repeatedly challenged the companies over poor treatment, contracts and wages. Uber’s recent dispute with Transport for London over its contemptuous attitude to regulations and the law is another manifestation of the same model of accumulation.

The broader effects of lean platforms on the labour market are a controversial topic of debate. Some organisations have downplayed the importance of non-traditional work arrangements mediated by digital platforms, while others have hailed them as transformative. In an online survey of 2,238 UK adults aged 16-75, the Foundation for European Progressive Studies [FEPS] found that 21 per cent of respondents (proportionally equivalent to 9 million people) had tried to find work on platforms during the past year and 11 per cent of respondents (4.9 million people) had actually succeeded in doing so. Based on these results, The Work Foundation claimed that a reasonable estimate of the proportion of the workforce finding jobs through digital platforms would be between 5 and 6 per cent. However, recent research from the McKinsey Global Institute revealed that 20 to 30 per cent of the working-age population in the United States and the EU-15, or up to 162 million individuals, are engaged in the ‘independent work’ typical of the ‘gigs’ provided by lean platforms. Of the individuals in the McKinsey survey, 30 per cent had chosen their work and derived their primary income from it; 40 per cent had chosen independent work to supplement their income; 14 per cent preferred a standard employment relationship, but were primarily independent workers; and 16 per cent engaged in supplemental independent work out of pure necessity. It is important to note that the nature of gig working may lead to under-reporting, and there is a large amount of overlap between different job categories, which makes it difficult to compare platform-based gig working with traditional employment. However, what is clear is that, regardless of the extent of these changes, they constitute an acceleration of existing trends toward casualisation and precarity.
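The headline population figures follow from simple scaling. Assuming a UK population aged 16-75 of roughly 45 million (a figure not stated in the survey write-up, used here only to check the arithmetic):

$$ 0.21 \times 45\text{m} \approx 9.4\text{m}, \qquad 0.11 \times 45\text{m} \approx 5.0\text{m}, $$

consistent with the reported 9 million and 4.9 million.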

At the moment, huge amounts of venture-capital investment into technology, automation, and artificial intelligence mean that firms like Amazon and Uber can continue expanding without actually making a profit. With the rise of platform capitalism, there is a strong possibility that we will see a corresponding rise in the organic composition of capital, i.e. a larger share of constant capital or the inert elements (tools, materials, equipment) compared to variable capital or living labour. In Capital vol. 3, Marx argues that this has direct implications for industrial profitability, which might explain the move by many platforms away from service models of production towards a model that allows them to profit through collecting rents from the use of their infrastructure or appropriating a share of profits from other sectors. This is not sustainable; however, it is likely too early to tell what this means for the future. On the one hand, advertising platforms like Facebook show no signs of slowing down. Facebook reported second-quarter net income for 2017 at $3.89bn, a 71 per cent increase compared with the previous year. On the other hand, the lean platforms that have driven the rise of the gig economy are already showing signs of slowdown. The JPMorgan Chase Institute has found that participation in labour platforms has levelled off and that workers’ monthly earnings from labour platforms have fallen by 6 per cent since June 2014 as a result of wage cuts and lower participation. Despite these findings, some of the world’s leading think tanks are recommending that ‘a platform strategy and the business know-how to exploit it is more important than “owning” an ecosystem’. The International Data Corporation predicts that by 2018 more than 50 per cent of large enterprises will either create or partner with industry platforms, and that the number of industry clouds will reach 500 or more. Time will reveal the veracity of this claim, but the shift toward a rentier form of accumulation through secondary exploitation shows no signs of stopping.
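The mechanism Marx describes in Capital vol. 3 can be stated compactly (standard notation, supplied here for clarity): with constant capital $c$, variable capital $v$ and surplus value $s$, the rate of profit is

$$ p' = \frac{s}{c+v} = \frac{s/v}{(c/v) + 1}, $$

so for a given rate of surplus value $s/v$, a rising organic composition $c/v$ depresses the rate of profit. This is the pressure that would push platforms away from value-productive service models and towards rents.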

The conceptual development of the term ‘platform’ as a new type of firm that relies on the strategic mining of data is a useful contribution to both Marxian and technological discourses, and it marks a novel economic phenomenon. However, we should be wary of claiming that this is a new mode of exploitation within capitalism. Most of the industry leaders in the platform economy don’t actually produce anything other than the means to profit from proprietary advantage, the sale of advertising, and the commodification of social data. Rather than signalling a fundamental shift in production and the condition of possibility of a technological utopia, they actually represent a regressive shift back toward what Marx referred to as ‘antediluvian’ forms of accumulation, i.e. secondary exploitation.

The evidence indicates that, contra the digital dreams of liberal Californian ideologues or post-capitalist utopians, platform capitalism will not provide the technological impetus to a future free of exploitation and drudgery. It might not even provide the robotic libertarian future romanticised by Silicon Valley entrepreneurs. Sales of industrial robots in the UK fell between 2014 and 2015, while only 14 per cent of business leaders are investing in AI and robotics. Ultimately, many low-margin service platforms will fail over the next few years; monopoly tendencies and cross-subsidisation will push other firms into luxury markets providing expensive convenience on demand; and those remaining will be forced to fold their model into more traditional business models that rely on product or industrial platforms. The rise of platforms may inspire technologically utopian rhetoric, yet it retains the same basic forms of twenty-first-century global capitalism. Unless we collectivise and ‘nationalise the platforms’, changing their very form, there is little hope for a utopian future.


 

A Note on Fieldwork


A key element of the fieldwork involves the contrasting narratives of management and workers with regard to conflict and cooperation in the workplace. By exploring the contrasting accounts of workers and managers at the sectoral level, I will be able to articulate the politics of ‘service’ production in the workplace. From the data, I hope to be able to detail different perceptions of the labour process, as well as the modes of conflict and cooperation. I hope to build on materialist theories of the ‘structured antagonism’ as well as the political dimensions of the value-form literature.

With regard to workers’ experience in the hospitality industry, I’ve found that the prime mover of the employment relation is whether staff are contracted through an agency or in-house. The secondary factor determining different conditions is whether or not staff receive payments through the tronc system. The tronc system is a major source of conflict. Despite working for different companies, workers across the industry have remarkably similar conditions and issues. Each interviewee has so far given both a portrait of their workplace and an account of key conflicts over the course of their employment. It is clear that the rhetoric and strategy of managers contrasts with many of the accounts from workers themselves. However, this is most stark when workers are union members. Unionised workers often tell me that they speak up and aren’t afraid to say when things aren’t right. Most of the non-union workers I’ve talked to tend to adopt the views of management and often internalise them – the ‘new spirit of capitalism’ is relevant here, as is Hochschild’s ‘managed heart’.

At the professional level, workers’ dissenting narratives are often missing. For example, the British Hospitality Association – the main industry lobbying body in the UK – provides a wealth of literature on the industry and argues for its economic significance to the national economy. However, it conspicuously omits accounts of the reality of work for most non-supervisory and non-managerial staff. Participants’ accounts of workplace tensions and the various tactics that managers use to suppress dissent – from intimidation to wage theft – will be used as a counterpoint to the managerial narratives I have collected so far, which fail to recognise the same problems at work. Most managers give a fairly glossy picture of their workplace despite the fact that the industry is plagued by violations and low pay.