[What follows is the manuscript I prepared for a workshop I did in Oslo last Sunday, for Montessori teachers from different European countries, most of them, I gathered, working at the high school or junior high school level. The event was arranged by Waterpark Montessori Norge.
During the workshop we covered much but not all of what is said below, since there had to be plenty of room for discussion. I'd like to thank the participants for a quite pleasant and rewarding day together!]
[First, we watched this interview:
I mentioned that Ray Kurzweil was recently appointed Director of Engineering at Google, a fact which should be kept in mind when evaluating the impact of his ideas and claims.
Then I began by holding up a smartphone and a physical book, and proceeded to compare them as mediums and tools for education and thinking:]
Part I: The Smartphone and the Book
Regarding the smartphone you have no idea what's going on beneath the surface. It seems highly intuitive with its touch screen and scrolling function, but it's all an illusion. There's no actual, intuitively graspable relationship between the felt movement of your finger and what's actually going on inside. Furthermore, the possible or potential content of what you can access via the screen is virtually infinite, because the phone is connected to the cell phone network and to the Internet. The smartphone is a computer, giving you access to anything a networked computer can access. (It also makes you accessible in ways you're probably not aware of.)
In other words, a smartphone is a great ”distractor”. It gets your attention all right, but then it very easily scatters it.
The book, on the other hand, is a physical object. When you turn a leaf you literally turn it. You can feel the texture and weight of the paper. The marks, the letters, are actual physical marks, not pixels on a lit-up screen that only represent letters, whose real existence is in binary code. You are directly aware of the book's beginning and end, and if it opens up some inner, imaginative space – that's all your work, a matter of your very own imagination, not a matter of access to a confusing multitude of web sites, movies, music, and games.
In other words, a physical book depends very much on you for access. It demands your concentration. If it doesn't get it, if you do not yourself activate your imagination, it remains silent. It doesn't automatically grab your attention, like the literally dynamic smartphone.
So, it's a ”concentrator”, powerful but not very forgiving if you're lazy.
Comparing the book and the smartphone yields some interesting observations.
The smartphone and the book are both physical objects, and both are based on code.
The complexity of the interacting codes (programs) in a smartphone is staggering, compared to the alphabetical code of the book, but they have this in common: they both work by means of abstract codes.
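To make the point about abstract codes a little more concrete, here is a minimal sketch in Python, an illustration of my own rather than anything taken from an actual phone: the letter you see on a screen is stored internally as a number, and that number, in turn, as binary code.

    letter = "A"
    code_point = ord(letter)            # 65: the numeric (Unicode/ASCII) code for "A"
    binary = format(code_point, "08b")  # "01000001": the same number written in binary
    print(letter, code_point, binary)   # prints: A 65 01000001

The alphabet of the book and the binary code of the smartphone are, in this sense, both conventions for encoding meaning; what distinguishes them is who, or what, does the decoding.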
The difference is that in the case of the book, you are yourself intimately and directly engaged and involved in the decoding of the message, and if you write something yourself, on a physical surface, with a physical implement, you are yourself the immediate author of the visible, literally tactile text. You can touch it. It won't go away. There's a permanent feel to it.
But in the case of the smartphone, on which you can write too, the text you see is just a visual representation of what actually happens when you touch what only seems like a keyboard on the screen. There is no direct connection between your finger touching the screen and the text you see appearing on the screen, a fraction of a second later.
Normally you don't pay any attention to this, because it all seems so intuitive and obvious. But it's really quite illusory. You could say that it fakes writing. What you subjectively experience as simply writing something, is actually just a part of all that's going on, and can go on, beneath the surface.
These invisible goings-on include other programs in the phone keeping tabs on everything you do with it, without alerting you in any way. You are, then, very much in the dark about what's really going on.
There is, in other words, a disconnect between what you think you're doing and all the other stuff that goes on, both in your own phone and in other computers connected to yours, more or less continually.
But, as I said, both the book and the smartphone are really based on code, and if you are aware of this, you actually have more access to the inner secrets of the world of computers than I think most of you realize. But for this you have to learn to think in a particular way. This is something we will return to at the end of today's workshop.
For now, I'd like us to focus on what this might mean education-wise, specifically from the point of view of some basic premises of Montessori education.
What the book-smartphone comparison signifies is the difference between the physical world and the digital world, focused on the implements of learning. The book has always been at the center of learning in civilized societies. Now it is more and more replaced by computers and the Internet, which partly appears like a collection of ”books”, too. However, the Internet is emphatically not just a ”library”.
Some years ago a group called the Alliance for Childhood published a report named Fool's Gold: A Critical Look at Computers and Childhood. I've seen quite a lot of debate about this, and I'm not surprised to find it discussed by Montessori educators, because it seems to vindicate one of the basic principles of Montessori teaching. For example, on the web site of the Montessori Society in the UK one can read:
"Neurological research confirms Montessori observation that different developmental issues are primary at different ages. In preschool children, sensory and motor skills, and the neural regions most related to them, are paramount. By pushing computer use at such a crucial stage for brain development, we are depriving your child’s intelligence of the actual food it needs for optimal growth. Fool’s Gold asserts that children need to learn their way first around the real world – ‘their bodies, their communities, nature - not cyberspace; they need hands-on experience, not simulations and content delivery, however rich in multimedia flourishes.’ At the time when the child’s brain needs to be absorbing how the natural world works, and adapting to human culture of its place and time, computer use can prevent the link."
Now, I'd like to put before you a rather subtle, but I think important question regarding this.
As you know, learning by touch and physical manipulation is basic in the Montessori classroom, particularly where young children are concerned. This is used to learn abstract things as well, such as letters, numbers and so on. Think of the sandpaper letters used in preschool. Their purpose is to gain a muscular memory of the shape of the letters as a prelude to writing. So, the immediate link between bodily experience and the world of thought is promoted and recognized as crucial.
If we apply this at later stages of learning, it would seem that the same basic orientation is applicable to the reading of ordinary physical books, but perhaps not when reading text on a screen.
[discussion:
- How important is what we might call ”bodily book learning”, the use of physical books, for higher education?
- If our students were to read only on screens, would something be missing?
- Is there something to be gained from reading physical books rather than on a screen?
- What's the difference in experience between reading a physical book and reading on a screen?
- How important is that difference?]
The difference may not lie only in the book itself, but in something about the external conditions of book learning.
From the Canadian newspaper The Globe and Mail:
”On the second day of school, 11-year-old Oscar Judelson-Kelly was asked to find the population density of Hong Kong and the German word for “horse.” The assignment seemed easy enough, but then his teacher introduced an unusual caveat: No using the Internet.
Oscar and his classmates at Elizabeth Zeigler Public School in Waterloo, Ont., thought their new Grade 6 teacher was crazy. Those questions were just part of a list of 100 they had to answer, and without Google they didn’t know where to start.
“We were stunned,” Oscar said. “We didn’t know what to do. Some people didn’t take it so well.”
[…] Oscar’s experience may become more familiar. Assigning offline homework is part of a quiet revolt against computer-dependency in the classroom. Educators agree a Luddite future is not what they want: Digital literacy is essential. But teachers are finding ways to ensure their students hone critical thinking and curiosity skills that don’t require WiFi, and perhaps consider the possibility that Google isn’t omnipotent.”
Compare this news report:
Part II: The Digital World Today (and Tomorrow?)
What our discussion of the differences between reading books and reading on screens highlights, in a very basic way, is that even though in both cases we read letters, we read text, the two media are very different. Reading on a screen only seems familiar. It is designed to feel familiar. But underneath, behind this familiar surface impression, something very, very different is going on.
This is a general point which is applicable to virtually all our use of computers.
From the user's point of view the programs we ordinarily use all function as amplifiers of some human ability or interest. They give us a sense of power. But that, unfortunately, is not all there is to it, because this means that computers and computer programs are themselves becoming more and more powerful, and already they can do some things much better than we can, as ordinary human beings.
In some instances we have in fact reached a point where it would be more appropriate to say that computers use us than that we use them. One way in which they ”use us” is to make us keep up with them, which will soon become impossible if it isn't already. In the words of the American thinker Stephen L. Talbott, who has a background in the computer industry: ”What we have made, makes us” (in his 1995 book The Future Does Not Compute).
I'd like to be clear about one thing. The digital world is here to stay. It won't go away. It will become smarter and more complex, quite rapidly. I think that any radical opposition (”Do not use computers!”) is meaningless. What is meaningful, however, is to learn to understand when the digital world impacts on us, as human beings, in a way that might take something important away from us. There is no hard and fast way to say in advance what that might be. It is quite possible, even likely, that many digital innovations will serve a creative and constructive purpose in one context, but be detrimental in another, perhaps even during the course of one day, in relation to one individual human being.
Consequently, what we need most of all is to know ourselves. Who are we, who do we want to be – in this rapidly changing world?
Confronted by any one technological product or innovation, we ought to ask: Why?
So, in this talk we will focus on what kind of world our children will live and work in, that is: what kind of world they will need to be able to find their bearings in, as human beings among a vast number of intelligent machines.
The crucial point here has to do with knowledge. Where do you find knowledge? One precondition of what we call civilization is libraries. This means that a large part of human knowledge is accumulated externally. And this means that the potential knowledge content of a book must be internalized again by a person, in order to become human knowledge. The Internet, with its vast stores of knowledge of every conceivable kind, is also external in this sense. But the Internet, and computer technologies generally, are not just ”libraries”.
When the historian George Dyson once visited Google, he asked why on earth they were scanning and digitizing all those millions of books. The answer he got was that the intention was not so much that people should read them, but rather that a powerful Artificial Intelligence (an AI) should be able to read them. This budding AI also reads everything anyone writes using Google's various services. And why is that? It's because all this material will help the current and future Google AI to understand and mimic human natural language use. And perhaps you've found that Google Translate is actually getting better and better. It may even reach a point where the question might pop up: Why learn a foreign language in school, if you've got a machine that translates for you?
Math teachers have already, for decades, experienced an analogous question because of ever more powerful calculators: Why should you learn to do arithmetic, say, if you've got a machine that does it both faster and more accurately?
A couple more examples: Recently IBM's supercomputer Watson beat the best human competitors in the quiz show Jeopardy. (It has, however, been beaten by a human, so the race is on.) This sort of thing might also be a task for some upcoming Google AI: you put your question to it and then you'll get personalized help in answering it, based on what Google already knows about you (which is a lot, unless you have disabled or regularly delete cookies in your web browser, among other things you could do, if you don't want Google or other agents on the Internet to get to know you too deeply). Today you can already speak to Google Search, by the way.
On the Net there exist many partly autonomous programs that continually ”mine” its data flows, aggregating and evaluating enormous amounts of information more or less in real time. They keep tabs on your surfing behavior and, among other things, help show you the ads that might interest you, specifically. Other programs aggregate different kinds of news faster and more efficiently than any human journalist can do. The list goes on. And in more and more instances human beings are left out of the loop.
One response to this is to think that it makes it necessary for us to merge with our supercomputers, to ourselves become very advanced cyborgs:
[Questions for discussion, Part II:
Is there anything that humans should always learn and know and do, for themselves, no matter how good machines get at knowing and doing the same thing?
Why?
And the reverse: Is there anything human persons do not need to know anymore, because a machine knows it better, and, consequently, no time and effort should be wasted anymore by humans learning it?
Why?]
Part III: Being Human
In my first talk today I said that both the printed book and the smartphone are really based on code, and if you are aware of this, you actually have more access to the inner secrets of the world of computers than I think most of you realize. And I said that for this you have to learn to think in a particular way.
Think then, first, of a non-fiction book. Perhaps it's meant to be entertaining, but it also has an educational purpose. Its author wants you to learn something, so she takes great care to present her material in an orderly way. That is, she uses the alphabet and the components of language, grammar and vocabulary, the actual codes of language, to literally construct the book. But the construction itself, the readable book, resides at a higher level than the code as such. This level is about the book's overall message, its overall agenda, the overarching ordering of its contents. The linguistic codes are really just the necessary but not sufficient means to deliver that overarching, overall content, what the book is about, what you get from it as you read.
There's a quite close analogy here to computer codes and computer programs. In this context the overall content, the overarching purpose, is called a model. A model is an abstract ”picture” of the world, or, more accurately, some part of the world.
It's somewhat like the overall message of a non-fiction book, with the difference that the book's message (the book's ”model” if you will, its picture of the world) depends on you, a human being, to be properly deciphered. And this means that there is some leeway; different interpretations and viewpoints are possible. Human judgment enters the picture.
This is not the case with computer programs. They are totally, completely literal-minded. They are complete slaves to the model implemented in the program. (There are literal-minded physical documents too. Think of one type!) Take Facebook's model of friendship: you either are, or you are not. Models can become more sophisticated, as in dating programs, consumer profiling programs, or programs used to evaluate employees. Whatever the sophistication, however, most programs today are based on very strict and inflexible models. (Intelligent software has some capability of altering its underlying models, and this capability is increasing in some implementations of artificial intelligence, but regardless of what Kurzweil and the Russian enthusiasts say, such software is a very long way from being human-like.)
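To get a feeling for just how literal-minded such a model is, here is a small sketch in Python, entirely my own invention and not Facebook's actual code: the whole ”model” of friendship is nothing but a collection of pairs of names, and the program can only ever answer yes or no.

    # The entire "model" of friendship: a set of unordered pairs of names.
    friendships = set()

    def add_friend(a, b):
        friendships.add(frozenset({a, b}))

    def are_friends(a, b):
        # The program can only answer yes or no; there is no "sort of" a friend.
        return frozenset({a, b}) in friendships

    add_friend("Alice", "Bob")
    print(are_friends("Alice", "Bob"))    # True
    print(are_friends("Alice", "Carol"))  # False

However sophisticated the model becomes, the program never steps outside of it; whatever nuance the model does not encode simply does not exist for the machine.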
Furthermore, because all of these programs function in a certain kind of economy, namely a capitalist market economy, they are geared – directly or indirectly – to satisfying the need for profit, which is why they must be both fast and efficient, regardless of our ”slowness” as human beings. In other words, there exists in the world today a ”model of models” which shapes what happens, and what can happen, to a very large extent. At its most basic this model of models is about bookkeeping. You don't want red figures on your balance sheet.
There is a very strong tendency for the digital world to turn us, as human beings, into components of itself, of its own functioning. For example, as I said before, the speed with which it operates tends to increase the speed with which we are expected to act and think. However, it is possible to turn this around and to learn to use it in ways that enhance human core values and meaningful human experiences. But that presupposes that we train ourselves, and our children, on our own terms, to be human. Our current and future technological environment forces us to become more consciously aware than ever before of the question ”What is a human being?” The very phrase Being Human was in fact the title of a very interesting report issued by Microsoft Research some years ago, warning of the unintended consequences of excessive computerization.
To get back to my point about models. Digital technologies are supremely flexible and adaptable, as such. Flexibility and adaptability reside most of all (but not exclusively) in the model stage of programming. This means that we as human beings have to learn to use our judgment in order to steer the digital world in our direction. And this means that we must understand what the digital world is and how it works. And we must, most of us, work towards using and developing it to our exclusively human, non-technological advantage. At present, I would say, this is often not the case.
In the year 2000 Bill Joy, cofounder of Sun Microsystems and an important contributor to the Java programming language, wrote an essay in the magazine Wired, titled Why the Future Doesn't Need Us, in which he warned us about what would happen if we didn't manage to become the masters of our own creations. We would, more or less, become their slaves, or, as Stephen Talbott writes: ”What we have made, makes us.”
So, what I'd like to end with is this. Education today, more than ever, should not primarily be a matter of ”preparing our children for the job market of the future”. It should be about making them capable of contributing to a working and living environment worth living in, as human beings. This will not come about by itself, because of ”progress” or some such lazy illusion. It requires of us to learn, and to teach one another, how to be human. Unless we do this we will become like our machines, like computer programs, modeled on them, modeled on models. We already are, to some extent, some of us more than others.
Machines are not curious.
Machines are not driven by imagination and ambition.
Machines cannot strive for a better society.
(Machines can learn a lot of things, some things much better than we can.)
But they can never become wise.
They can never exercise critical judgment, least of all towards themselves.
[discussion/summing up]
/Per
Note: Recent Norwegian research seems to confirm my suspicion regarding the unique learning qualities of physical books: Skjønner mindre og leser dårligere på skjerm (”Understands less and reads worse on screen”).