[This is a talk I gave at FSCONS 2013, on November 9]
I've been studying, writing, and talking about the dystopian side of technological progress for so long that eventually I got tired of it. So, instead of focusing on critical issues, I began to do research on what might be called the ”thought forms” involved in utopian/dystopian or positive/negative debates, particularly regarding digital technologies.
The objective developments are clear enough, and often not very reassuring, but they are at least talked about and debated. The subjective, really personal side of it all, however, seems to me to have been less clearly articulated, or at least not brought to the fore forcefully enough. So, what might happen if we, in this largely technical context, begin to really focus on ourselves, as human beings? This has come to occupy my thinking more and more, and hence the title of this talk.
Let's start with two examples of what might soon become objective changes:
1. Recently the OECD issued a document called PISA 2015: Draft Collaborative Problem Solving Framework (PISA = Programme for International Student Assessment). PISA has come to the conclusion that collaboration between humans might be a good thing – but how do you assess that? It's difficult to control the parameters involved in an actual collaborative situation sufficiently, in order to be able to assess it in a way that makes the results comparable across different years and countries. Therefore, it has "been decided to place each individual student in collaborative problem solving situations, where the team member(s) with whom the student has to collaborate is fully controlled. This is achieved by programming computer agents."
The document is quite explicit as to the reason for this artificiality in the assessment of (human!) cooperation: "When humans collaborate together, it often takes considerable time for making introductions, discussing task properties, and assigning roles during the initial phases of CPS activities [...] and also for monitoring and checking up on team members during action phases” [CPS = Collaborative Problem Solving]
I can't help getting the feeling that this very document must have been authored by a programmed artificial agent.
2. In September this year two researchers at Oxford University published a paper called The Future of Employment: How Susceptible Are Jobs to Computerisation? In their estimate, ”about 47 percent of total US employment is at risk”. "Our model predicts that most workers in transportation and logistics occupations, together with the bulk of office and administrative support workers, and labour in production occupations, are at risk. […] a substantial share of employment in service occupations […] are highly susceptible to computerisation." Furthermore, computerisation will principally be "confined to low-skill and low-wage occupations. […] as technology races ahead, low-skill workers will reallocate to tasks that are non-susceptible to computerisation – i.e., tasks requiring creative and social intelligence. For workers to win the race, they will have to acquire creative and social skills." [my emphasis]
So, here we have, on the one hand, a scenario in which the cooperative capabilities of students are to be assessed by means of cooperation with computer programs, because ordinary human cooperation is too messy to assess in a standardised fashion. And, on the other hand, we have a scenario in which the only jobs left to humans are the ones demanding creative and social skills. To me this indicates two different trajectories along which what might be called ubiquitous computerisation is heading.
One trajectory means that human beings are measured and assessed according to standards that are amenable to machine learning and machine intelligence. The tendency here is to disregard all the messy, all-too-human traits of which we, at heart, are so fond -- in a friend, in a colleague, in a lover. The possible consequences of this trajectory seem to be clearly dystopian -- speaking now as a human being. But from the perspective of the OECD bureaucracy, it is equally clearly utopian. It gets us closer to that ideal state of affairs in which students, as well as the educational systems of different countries, can be compared quite mechanically and efficiently. It will certainly be cheaper than any all-too-human -- and because of that incomparable -- kinds of assessment. And why trust humans at all in this matter?
The other trajectory, in which the only jobs left to human beings are those requiring creative and social skills, is less clearly dystopian. It could be argued that it might be a good thing. Think of all the more or less boring or routine jobs that won't bore anyone anymore, because machines never get bored. On the other hand, we could note the wording ”for workers to win the race...”. There is a race going on between humans and machines, for jobs.
Now, speaking of trajectories, or trends, the question arises: How relentless, how deterministic, are these developments? If you ask Ray Kurzweil, Google's Director of Engineering, he will speak of The Singularity, a state of affairs in which the capabilities of technology far outstrip the capabilities of technologically non-augmented humans. We won't be able even to enter the race without more or less merging with our technological creations, which will start to evolve independently of us, if they haven't already begun to do so. The latter view is espoused by Kevin Kelly who, in his book What Technology Wants, calls the emerging results of this allegedly autonomous technological evolution The Technium.
I think there are solid reasons to believe that Kurzweil's Singularity scenario, based on an exponential growth of machine intelligence, and Kelly's evolutionarily deterministic Technium are really nothing more than fairy tales.
I don't mean that in a disparaging way. The Singularity and The Technium are mythological conceptions, and this is important, but they are emphatically not science. The Singularity, in particular, is now almost becoming a household word among many educated people. It catches on because it ties in so well with the kind of science fiction futures we've been promised for well over a hundred years by now. And now, at last, science and technology seem to be catching up with fiction.
One of the clear signs that The Singularity is a myth is that it is very inspiring. That's what real myths are for. And the thing with inspiration is that it can be exhilarating as well as terrifying -- and sometimes it's the terror that exhilarates. These mythically induced emotions, exhilaration and terror, constitute the life breath of utopias and dystopias, which is to say that they're not very intellectual, or even -- I fear -- very intelligent. Intelligence is a tricky concept, and I think it means rather different things for human beings and machines.
I also think that we are subject here to a kind of collective illusion which says that rationality, logical arguments, efficient (and preferably cheap) procedures are the essence or the epitome of ”intelligence”. And I think that this is a very serious mistake, a very dangerous illusion indeed. But it can be difficult to really grasp this, unless one manages to get very much closer to our everyday lives when thinking about it.
But let us stay with the big picture a little while longer. Recently I've noticed that many more people, who are professional technologists in the digital industries, are becoming critical towards some of the possibilities inherent in the increasing technologisation of society. I recall, for example, a long conversation I had with an information security consultant, who also had ties to Swedish intelligence agencies. His view of the future, privately, was extremely dystopian. I almost felt like I was talking to the Unabomber himself.
Ted Kaczynski, a.k.a. the Unabomber, is a mathematician who became so enraged by the impact of modern technology that he decided to kill to make his point, and he targeted individual technologists. He is also very smart, very articulate. You can study his really stark, really dystopian vision in his collected writings, Technological Slavery. (If you're a dystopian yourself it should be a real treat.) His vision of a future in which the technological society is not destroyed – despite his sincere wishes – goes like this:
"Suppose the system survives the crisis of the next several decades. By that time it will have to have solved, or at least brought under control, the principal problems that confront it, in particular that of "socializing" human beings; that is, making people sufficiently docile so that their behavior no longer threatens the system. That being accomplished, it does not appear that there would be any further obstacle to the development of technology, and it would presumably advance toward its logical conclusion, which is complete control of everything on Earth, including human beings and all other important organisms."
This is exactly the same vision as in Kurzweil's Singularity, only with a resounding minus where Kurzweil sees mostly a plus.
Now, to conclude, let us go back to my initial example, the OECD's ”Collaborative Problem Solving Framework”. Here a thoroughly human context is invaded by technology for one reason only: it makes something qualitative and messy ostensibly measurable. Never mind that the overall context becomes so different that it is really unclear what actually is measured. This I see as an example of measurement mania.
My other initial example concerned the replacement of humans by machines in the job market. Here, too, there is a kind of mania at work. There are many jobs that I, for one, think both can and should be done by machines -- but where does this vision, no, this real trend of wholesale replacement, based on procedural efficiency and relative cost more than anything else, come from?
I think both the measurement mania and the cost and efficiency mania come from the collective illusion I mentioned earlier -- the view that rationality, logical arguments, efficient procedures are the essence, the epitome, of intelligence and civilization.
Unfortunately this is an illusion that by now is thoroughly manifested in all kinds of structures, institutions, routines. So it won't go away just because some people start to wake up and see it for what it is. But that doesn't mean that its influence is inevitable and deterministic. It only means that it is strong. And, fortunately, there are forces working incessantly to undermine it.
Yesterday, as part of my job as a consultant, I talked with some junior high school teachers about the impact of the new mandatory curriculum for all Swedish elementary schools, called Lgr 11. An overarching demand in Lgr 11 is so-called entrepreneurial learning, which emphasises abilities such as creativity, curiosity, self-confidence, the will and ability to try out your own ideas, etc. This clearly goes against the grain of traditional industrial school curricula and thus against the grain of the very industrial society that has fostered the illusory mania of narrow rationality. It was quite inspiring to hear those teachers speak of the sometimes amazing changes pupils went through when they suddenly realised that their own interests, experiences, and ideas mattered.
What really stuck in my mind was a question put by one school's headmaster, a question that was evidently of some concern: ”But how do we measure the progress we have noticed with entrepreneurial learning?” It was clear from what she and the teachers said that the positive learning changes they had noticed could really only be noticed by themselves, on the basis of their personal knowledge and experience. It was, in other words, based on mature human judgment. Which is as it should be. You actually don't have to ”measure” in order to know. It may be quite enough to trust the judgment of experienced practitioners. But technologised bureaucracies are, by default, really uncomfortable with this. And clearly this kind of (all-too?) human judgment messes things up for the PISA mentality.
Kids in these schools will have a hard time accepting unnecessary formal and mechanical structures as they grow older. They don't know yet what they're really up against, but nothing in a life worth living is easy, is it?
As long as there are human beings who can stand up and say – with the actor Patrick McGoohan in the 60s TV series The Prisoner – ”I am not a number. I am a free man”, I, for one, refuse to believe in either utopias or dystopias.
/Per