Artificial intelligence, tech workers, and universal income: An interview

What are the possibilities for artificial intelligence to free us from the drudgery of work? And what about the class outlook of those developing these technologies? Read and find out.

What follows is an interview with a friend of mine, Ron, who’s a software engineer. Although we didn’t meet through politics, I did come to discover that, with his fellow tech workers, Ron spends a lot of time considering, discussing, and theorizing about artificial intelligence and its potential for human liberation. As someone who’s basically technologically illiterate, I find this all very fascinating. I also came to find out that Ron’s interested in Universal Basic Income, which he views as the “best prevailing theory for the path forward.”

I should say, at this point, that I fully expect most regular libcom readers to have some real disagreements with Ron. Throughout the interview certain assumptions - about capitalism, politics, the economy - are taken as given. I have deep criticisms of Universal Basic Income, myself. However, I think it's fair to say that most UBI advocates have their hearts in the right place.

Those of us with a liberal arts/humanities background are seriously overrepresented in the anarchist movement – especially when it comes to the theoretical end of things. Reaching out to and engaging with politicized technology and computer workers can only benefit the movement. If we're lucky, it might even help to inject a bit more of a class analysis into the UBI milieu.

More to the point, tech workers hold an incredible amount of power in the modern economy. Militant, organised tech workers could control an invaluable choke point in any company or industry. And while, at one point, high-tech workers may have enjoyed certain ‘white collar’ benefits, as the economy becomes increasingly digitalized, digital workers are becoming increasingly proletarianized.

By understanding how tech workers view their position within the economy and what they envision for a more just world, we can begin to understand how to better link up with those same workers in our own workplaces.

Sum up the general attitude toward UBI amongst supportive tech workers.

The support, as I perceive it, stems primarily from the fact that if you assume the AI/robotics revolution is inevitable, there’s no good way to handle it. If compensation is based on a measure of productivity, and most productivity comes from AI/robots, then most compensation will flow to the owners of those AI/robots. The simplest solution, in a broad sense, is to take that compensation, using the power of government, and redistribute it back to the people. There’s some ancillary appeal in the fact that it could replace other welfare programs, but those details are incidental to the idea. The perception is that there’s no other mechanism that can work, since the concept of “working” for “pay” doesn’t exist in a world where robots can do any human job.

How do most people in the UBI movement see UBI coming about?

The most common thinking is that it gets marketed as a replacement for welfare. This is thought to appeal to both sides of the political spectrum. On the one hand, you’re reducing the moralizing that often goes with government welfare programs. On the other, you’re meeting people’s needs in a simple and straightforward manner. Strategically, things like addressing the problem of inequality, raising the minimum wage, or changing the ratio of worker pay to executive compensation all serve to inch society towards the goal of UBI.

If people get used to the idea that they deserve a living wage - regardless of education or “skill” level - then as more people realize that we are all uneducated and low-skilled compared to an AI/robot, they too will feel like they deserve a living wage. If there are no jobs available, then maybe this wage can come from a social program? Since the framework of the economy would otherwise remain intact (Apple/Google/Walmart still wanting our money), maybe people won’t feel this scheme is the Cold War-era Communism we as a society fear.

Does it play out practically on the job? In other words, are UBI-oriented tech workers more likely to raise issues at work?

Currently, no one talks openly about this in the broader tech world. The exception is people working directly on AI. AI researchers aren’t shy about discussing the political implications of their work, although few have gone into great detail. In the past six months, there’s been a massive uptick in the number of mainstream news articles about AI taking human jobs, and the possibility of rising unemployment as a result. Incidentally, there have been several reputable studies showing, I believe accurately, that increased robotics and automation hasn’t cost jobs up to this point, because the increased productivity allows companies to expand and grow. These same studies haven’t addressed wages, however, which we know have not kept pace with worker productivity. I would hypothesize that at least part of the wage suppression is due to increased automation over the past 25 years.

However, people in this field are more likely to support higher wages for workers across the board, which could be due to a conscious or subconscious realization of what technology might bring. The issue is still in its infancy, and I’m sure as time goes on, there will be battle lines drawn around these issues. The “sharing economy” is a good idea, when it’s actually about sharing.

Why is the sharing economy important to a UBI discussion, you might ask? Because no matter what form of UBI gets implemented, if any at all, it still wouldn’t be enough to match the incomes needed to maintain our current quality of life; this means humans might find other things to do. Might it be the case that 20 years from now, we all work a few hours a day, sharing our skills/talents by means of a smartphone app?

When a computer can write computer programs better than me, maybe there will be a service that lets people hire me to do handyman work, or help with lawn maintenance, or drive them around, or any number of things that other humans might prefer a human to do for them, that could supplement a UBI.

More generally, are other economic or labor issues discussed on UBI forums? What's the general perspective – political, social, or otherwise – in UBI circles?

On UBI forums, there are three main camps: those that believe computers/robots will be humanity’s salvation, those that believe the AI/robotics revolution will never happen, and those that believe the AI/robotics revolution will lead to new industries and jobs that only a human can do. If you believe AI will be the salvation, then the other problems solve themselves-- robots build all the cool stuff we love (iPhones and Netflix), food is cheap and readily available, healthcare is high quality and cheap or free, etc. There’s not much reason to talk about other political/social issues, because once we pass the initial upheaval, we have nothing to worry about. These people believe that once robots start taking white-collar jobs (programmers, lawyers, doctors, teachers, etc.), UBI must surely be inevitable, right?

People that believe AI won’t happen are, I think, completely wrong, but they too have no reason to think about other economic or labor issues, because we can do what we’ve always done: make minor tweaks to business/tax policy and “let the markets sort it out.”

People that believe new industries and jobs will arise have no reason to delve more deeply into other economic or labor issues either, because we can mostly keep doing what we’ve been doing. So from my perspective, other than do-nothing-much or UBI, there’s not much serious discussion of alternatives.

Why do you think tech workers in particular are attracted to UBI as an idea?

Fundamentally, it’s simple and easy to think about, it’s potentially very efficient and effective, no alternative has been able to gain traction, and it does seem politically viable eventually.

Imagine eliminating most of the anti-poverty programs and replacing them with UBI. Musicians and artists can pursue their crafts while having a way to support themselves. Tech geeks can form the startup they’ve always dreamt about and still be able to eat. Even the lazy pothead can sit around playing PlayStation all day if they get fed up with having to work. All because robots can still maintain economic productivity.

What's the most common objection to UBI you hear from other tech workers?

That it’s too costly (which is why you often hear it tied to winding down most of the welfare programs and using that money) and that it’s “socialism”. “Socialism” in this context meaning that people don’t want their tax dollars going to the aforementioned artists/musicians/potheads because they’re lazy and should have to work, somehow, somewhere.

Alright, have some fun with this last one: take us on a little journey to a time when technology is used to its full liberatory potential. What does that world look like?

This is something I think about a lot, and my opinion is constantly changing. I once had an argument with a friend about this; he is convinced that Facebook, Google, et al. are amassing the power and technology to know everything about everyone, will use this power to manipulate us, and there’s nothing we can do. I don’t disagree that this is possible, and a recent research project showed how merely changing the ranking of top-10 search results massively influences people’s opinions. My argument, however, was that the same predictive, intelligent software that lets Facebook know you better than your own mother (as another study has demonstrated) could also be used for social good. What if AI allowed us to identify neo-Nazis or immediately fact-check politicians and policy makers?

The other dimension is that smarter technology means smarter tools. There’s a recent video of a Disney artist drawing the Little Mermaid in 3D using virtual reality goggles, and it’s very impressive. There are similar videos of people sculpting in 3D using motion-control wands (like the Nintendo Wii). Today, if I wanted to sculpt something bigger than a lump of clay, I’m out of luck. With these new technologies, however, anyone with even minimal talent can create works of art presently only achievable by the very luckiest of artists. Right now, you have YouTube millionaires who make their living from ad revenue on their home-made YouTube videos, completely outside the sphere of the traditional media monopolies. TwitchTV allows video gamers, who are very dedicated to their craft, to support themselves playing games, with people watching them and donating money directly to them for doing cool or entertaining stuff. I don’t think I have to describe how the Internet has impacted people's ability to organize and share ideas. Every new technology brings with it a wave of creative new uses, and I don’t see this changing.

When you order a pizza, rather than a person bringing it to your door, a self-driving car brings it to you, and you just go outside to get it (or a robot walks it up to you). When you buy a new toothbrush, a flying drone drops it off at your house the same day-- no human needed, and you get your order right away. Or, if you’ve cooked too much dinner, you can just package up the extra, an autonomous car comes to your door, and it drives your extra dinner to a homeless person or someone in need. Maybe you don’t even have to be poor or homeless: you might just want a home-cooked meal and be feeling lazy, and someone near you has extra and doesn’t mind anonymously and easily summoning a robotic drone to deliver it to you, all arranged via a smartphone app. Since robots now do all the cleaning and maintenance, maintaining public shelters for the indigent isn’t a big problem anymore.

However, this all precedes the era known as the “Singularity”, based on the concept of “superintelligence”. The thinking goes that once a computer can surpass a human level of intelligence, it’s impossible to predict what will happen after that point-- similar to the “event horizon” of a black hole. You could have a scenario where the robots destroy us all and evolve to become humanity’s legacy to the universe, or you could have a situation where we live our lives in bliss and harmony. Computers will cure cancer and aging. People who want to be farmers can have their own farm; people who want to drink all day and have sex in virtual reality can do that. If you like to hike mountains, you’re free to do that as well, and if you fall, a robot will catch you. Nick Bostrom details the ins and outs of this idea in his book “Superintelligence”; you can also read a more positive take in “Engines of Creation” by the leading nanotechnology researcher K. Eric Drexler. Solutions such as UBI are thought of as stop-gaps or transitions to this Singularity era.

Posted By

Chilli Sauce
Oct 13 2015 03:19

Comments

Joseph Kay
Oct 16 2015 14:13

Yeah, you'd only need strong AI to automate tasks that require reflexive thought or something. Any job that can be explained in a flow chart can in principle be coded as an algorithm (afaik this is the case for lots of skilled work). The limit to things like packing robots hasn't been the coding side but the sensor/feedback side, e.g. machine vision. As advances are made here, automation becomes viable.

(Note: this robot doesn't need the batteries to be in a standard position; it 'sees' them and quickly adjusts. It would probably be cheaper to just funnel the batteries into a standard orientation, but it's a trade-show demonstration of the tech, 'intelligently' packing objects being one of the warehouse jobs not yet automated, unlike picking and portering.)

Khawaga
Oct 16 2015 14:50

Gregory, you really are thick. AI does not mean the singularity. While the singularity is sci-fi, AI is not. AI does actually exist in the here and now, but you seem to assume that strong AI (i.e. a being with a general intelligence like humans) is what AI really is. Sad to say, you seem to know very little about this topic at all.

And rather than engaging with other posters, you yet again ascribe ideas to us that we don't have. Now we are all transhumanists in favour of the singularity. If you had bothered to actually read some of the posts, you might have realized that some of us are extremely critical of that belief. That's why I have a problem with the left accelerationism of Srnicek and Williams; they are in part transhumanists. And there is a lot of shit thinking and politics that comes with that.

JK wrote:
But logically you could accept their main claim while criticising their particular vision of modernity, and that discussion would be part of what they're calling for, i.e. a discussion of what the future should be. Maybe it's even deliberately provocative (it is a manifesto, after all) in order to provoke that discussion.

Agreed. I do like the call for an epistemic acceleration and think it is warranted, but I am very critical of their version of it, which I think represents a deceleration and an uncritical return to Enlightenment/modern thinking. And their argument is based on some strawmen (e.g. the folk politics and localism they think are endemic on the left; far from it, I think it is rather the opposite), is contradictory, etc. Whether they were deliberately provocative I don't know, and I don't think it matters, because they have provoked a lot of discussion, and for that I think the manifesto is great despite all the major problems I have with it.

Chilli Sauce
Oct 16 2015 18:54

Greg, perhaps you noticed my suggested structure for the debate?

Here's a clue: it's not libcommers for the singularity.

OrangeYouGlad
Oct 16 2015 22:50
GREGORYABUTLER10031 wrote:
Ron appears to be an acolyte of what some are calling the California Ideology - the reactionary techno utopianism common among professionals in the technology industry

...

Ron here...

I think you're right to point out that in the tech world, with Silicon Valley-ites and "futurologists" in general, this automation future is viewed very positively, as a type of "salvation" or utopia. I don't necessarily think these changes are inherently positive or negative, but I view them as inevitable, and they have the potential to be positive if we make them positive-- which we need to start doing right now, very vigorously. If we fail, the future you envision, with people waiting for their welfare check surrounded by squalor, is very possible.

I don't see a future where these technologies don't exist, and these technologies necessarily will fundamentally change many aspects of our society. Whether this change is good or bad depends on us.

However, what do you view as the ideal or likely alternatives? I can only envision the alternative to ubiquitous automation being a kind of Luddite society maintained with heavy-handed government intervention, but I fully acknowledge I likely have a blind spot to other possible outcomes.

Joseph Kay
Nov 5 2015 07:13
Joseph Kay wrote:
I've been asked to write something on Srnicek & Williams' 'Inventing the Future', which also makes a full automation/UBI gambit.

This is up here now: http://thedisorderofthings.com/2015/11/04/postcapitalist-ecology-a-comment-on-inventing-the-future/

There's another Out of the Woods contribution coming in the next few days.

Khawaga
Nov 5 2015 14:54

Good piece JK. I now look forward to reading Inventing the Future even more (for some reason I can't get a hold of a copy in North America).

I'm curious to see what more they write about Project Cybersyn (I heard Srnicek gush about it at a talk), as the clear problem with that project was that they were trying to literally engineer socialism but ran into the usual problem of not taking social relations into account. Just from reading the manifesto and a few reviews of their book, it seems like they take an extremely technocratic approach to... almost anything, and they appear to be completely beholden to the notion of progress. They should probably read some Virilio...

Their "folk politics" category also seems to be a bit of a straw man; a construction of theirs into which they can put everything they don't like. Do they explain more what they mean by it in the book?

Joseph Kay
Nov 5 2015 16:01

They define folk politics as a tendency rather than an ideology, but yeah it's quite a fuzzy concept. Another Out of the Woods piece that went up today says a bit more on it: http://thedisorderofthings.com/2015/11/05/why-we-cant-let-the-machines-do-it-a-response-to-inventing-the-future/

Khawaga
Nov 6 2015 00:35

That's also a really great piece. Thanks for posting. I gather from both your piece and the other one something I thought might be the case with their book: the very "masculine" approach to things. Perhaps unwitting on the part of S&W, but it makes perfect sense given their transhumanist inclinations; and H+ can at least in part be described as a male, white power fantasy.

Joseph Kay
Nov 6 2015 09:45

They do reference people like Silvia Federici and Walter Mignolo, and are keen to distance their advocacy of 'progress' from colonial history. I guess the question is to what extent they add those thinkers in, without modifying the underlying argument accordingly. Like e.g. Federici (in Revolution at Point Zero) makes an explicit critique of full automation as a goal:

Federici wrote:
Reflecting on the activities that reproduce our life dispels the illusion that the automation of production may create the material conditions for a nonexploitative society, showing that the obstacle to revolution is not the lack of technological know-how, but the divisions that capitalist development produces in the working class. Indeed, the danger today is that besides devouring the earth, capitalism unleashes more wars of the kind the United States has launched in Afghanistan and Iraq, sparked by the corporate determination to appropriate all the planet’s natural resources and control the world economy.

Joseph Kay
Nov 6 2015 09:48

Chilli Sauce/OrangeYouGlad - if this Inventing the Future stuff is a derail I'll start a new thread.

Chilli Sauce
Nov 6 2015 15:31

Not at all, man!

Flava O Flav
Mar 27 2016 14:50

This is that thing on Mason's postcapitalism I was writing. Took a while.

https://selfcertified.wordpress.com/2016/03/27/the-new-world-in-front-of-us-mason-postcapitalism/

Spikymike
Mar 27 2016 15:35

Thanks Flava. I've saved it to read later, but it could also usefully be linked to this other discussion here: http://libcom.org/forums/theory/paul-mason-end-capitalism-has-begun-18072015

Flava O Flav
Jan 30 2017 12:44