After the collapse: notes on the neotechnological utopia (excerpts) - Jean-Marc Mandosio

Chapter 3 and portions of Chapter 4 from Jean-Marc Mandosio’s book, Après l'effondrement: notes sur l'utopie néotechnologique (Éditions de l'Encyclopédie des Nuisances, 2000), in which the author discusses the disastrous effects of what he calls “neotechnology” on the human species and how these disasters are imposed as wonderful innovations in all domains, from music and books to genetic engineering, resulting in a “four-fold collapse” affecting the human perception of time and space, the ability to think, and “the very idea of humanity itself”.



Chapter 3

Neotechnological Conditioning1 – Jean-Marc Mandosio

“Reaction against machine-culture. The machine, itself a product of the highest intellectual energies, sets in motion in those who serve it almost nothing but the lower, non-intellectual energies. It thereby releases a vast quantity of energy in general that would otherwise lie dormant, it is true; but it provides no instigation to enhancement, to improvement, to becoming an artist. It makes men active and uniform—but in the long run this engenders a counter-effect, a despairing boredom of soul, which teaches them to long for idleness in all its varieties” (Friedrich Nietzsche, Human, All Too Human, 1880).

The development of neotechnology2 is by no means an ineluctable fate; “information society” is not the end towards which humanity is naturally tending, contrary to the view of those who, following the example of Alvin Toffler in The Third Wave,3 divide history into successive stages that proceed from the concrete (the agricultural “revolution”) to the abstract (the information “revolution”), with industrial society as the intermediate stage. Totally opposed to this conception, we affirm that our society’s adaptation to neotechnology is the product of a process of conditioning, one that conforms to the technology that engendered it. We shall set forth our arguments in this chapter, first criticizing the idea of historical destiny; then specifying the relation between technics, technology and neotechnology, which will then lead us to a more detailed demonstration of the fact that the critique of the latter two does not entail, contrary to what some people think, a “critique of technics” that, in its extreme generality, would be meaningless; finally, we shall examine in detail the modalities of neotechnological conditioning as the latter is exercised in two particular domains—music and books.

We have long been told about the ineluctable advent of the “third industrial revolution” and of the “information society” which is its alleged result. Beginning in the late 1970s, the F.A.S.T. program,4 implemented by the European Union Commission in order to “help define European R&D (research and development) priorities for the purpose of developing a coherent long-term science and technology policy”, made the “information society” one of its three research priorities,5 under the pretext that “the informatization of society will be the great challenge of the next two decades”. Now that those two decades have passed, we can confirm that the “informatization of society” is an obvious reality, just as was predicted. This confirmation can give rise to two opposed interpretations:

a) This informatization was inevitable; the fact that the national and supranational institutions had foreseen it is a sign of their clairvoyance and their solicitude with respect to the populations for whom they are responsible;

b) This informatization is the result of a deliberate policy, which has imposed it by presenting it as something inevitable; the national and supranational institutions have planned it, and spared no effort to transform this plan into a reality; if it was necessary to engage in such planning, this is precisely because this development had nothing inevitable about it.

According to the first interpretation, technical development—which we must understand as a development in this particular direction—is fated: it constitutes the result and the supersession of preceding stages in the evolution of the human species, regardless of subjective evaluations; all objections against it are therefore vain, in accordance with the old adage that “you can’t stand in the way of progress”. This interpretation implies that human history is oriented a priori towards a particular direction, regardless of anyone’s will, according to the process that Hegel called “the ruse of reason”.

The second interpretation, on the other hand, considers that historical development does not have any predetermined direction: this direction only becomes apparent afterwards, in conformance with the principle of post hoc, ergo propter hoc (“after this, therefore because of this”), an error in logic that has been understood for centuries, which consists in confusing temporal sequence with logical causation. One would thus say, for example, that Christianity triumphed over ancient paganism because it had to triumph, but it is only afterwards that this necessity seems to be imposed as proof; before the victory of Christianity—which was not necessarily destined to take place—was a foregone conclusion, the only people who were convinced of its inevitability were those who fought on the side of Christianity, whose victory was considered to be inscribed in God’s plan. In the case of the “information society”, what is presented as an anticipation of the future is in reality the deployment of strategic decisions that are certainly not the product of chance or of some kind of destiny that rules over humanity.

To view the question in this way does not imply that we have succumbed to a paranoid view of history; it just means that we recall that history in general—and the history of technics in particular—is not the result of a process that unfolds autonomously, but a succession of actions taken and not taken, of conflicts and compromises, of individual and collective victories and defeats that have nothing inevitable about them. These actions, etc., are inscribed in certain conditions, conditions of particular times and places that determine, perhaps irreversibly, the existing possibilities for action and decision. Every political program—and the informatization of society is a political program rather than a merely technical and economic one—tends to be presented, since this is one of the preconditions for its effectiveness, as an inexorable fate: the saying, “There is no alternative”, which was the favorite motto of Margaret Thatcher, is the real leitmotif of all modern politics.

The most extreme opponents of technological development share with its advocates the conviction that this development is inevitable. The idea that technology is the destiny of the contemporary world became generalized after the Second World War; this idea can be found (although subjected to different kinds of analysis) in the work of Martin Heidegger, Günther Anders and Jacques Ellul, as well as in the titles of certain works that addressed these questions, such as: Le Destin technologique (Balland, 1992) by Jean-Jacques Salomon, Il Destino della tecnica (Rizzoli, 1998) by the Italian philosopher Emanuele Severino…. The conviction that this is an inevitable development reduces all attempts to oppose technological conditioning to nothing but a refusal to comply, the prelude to a resignation that has exactly the same practical effects as acceptance: one allows it to happen without any objections, and finally reluctantly adapts to it. This is the attitude that Leibniz, in his Theodicy, criticized under the name of Fatum Mahometanum:

“Men have been perplexed in well-nigh every age by a sophism which the ancients called the 'Lazy Reason', because it tended towards doing nothing, or at least towards being careful for nothing and only following inclination for the pleasure of the moment. For, they said, if the future is necessary, that which must happen will happen, whatever I may do…. The false conception of necessity, being applied in practice, has given rise to what I call Fatum Mahometanum, fate after the Turkish fashion, because it is said of the Turks that they do not shun danger or even abandon places infected with plague, owing to their use of such reasoning as that just recorded.”

On the opposite side of the spectrum—at least in appearance—we find those who, like Jean-Jacques Salomon, assert that “there is nothing inevitable about technological change” (which did not prevent him from giving his book the title, Le Destin technologique), nor any such thing as “technological determinism”, and who rely upon this alleged indeterminateness to encourage a “democratic” participation of the population in decision-making processes, for the purpose of enforcing a “regulation of technical change”, whose guiding slogan is: “The control over technology is everybody’s business.” It is somewhat similar to the “citizens’ control” that some people are demanding with regard to the World Trade Organization, as if a “democratic regulation” of capitalism were not, by definition, a game rigged from the start.6 And Salomon’s proposals are revealed to be just as vapid once they are examined, because the margin for maneuver that they define is finally, setting aside all reassuring slogans, very narrow, and is reduced to mere supplementary measures for a technical change that becomes—contrary to the initial postulate of the book—the destiny that he did not want to admit in the first place:

“Prometheus, although shackled [by ‘democratic control’], will always advance without being slowed down by any obstacles, but it is up to us alone to make sure that his artifices are the work of Prudence rather than Thoughtlessness. The dynamism of technical change inexorably defines our future. The support given to it will decide the opportunities that democratic societies will have to confront, within social harmony, the technological changes of tomorrow. The fears it inspires are only equal to the possibilities it offers….”

For Salomon, it is therefore not a question of reversing the course of a technical development whose universally disastrous implications he nonetheless points out, but simply of “confronting, within social harmony” what “inexorably defines our future”—in other words, our destiny.

In the same way, Dominique Bourg, in L'homme artifice (Gallimard, 1996), begins by denouncing, with more than enough reason, those who “see technics as a fatal destiny”, declaring that “this concept of technics can lead to nothing but extreme passivity: for what purpose should we try to affect this or that aspect of social development, if we are the playthings of an implacable destiny?” But he comes to the same tepid conclusion as Salomon: “We must … more than ever before redouble our scientific and technical efforts to confront the various environmental crises, in order to control the consequences of our own actions.” We have to distance ourselves from “an indefinite extension of artificialization”—since it is “neither desirable nor possible to endlessly pursue such a program”—by following a new way, which would take into account our “responsibility” in relation to “that part of nature that is the biosphere”. But this magical reversal is an intellectual projection, whose real purpose is to gild the pill of technological development in order to make it easier to swallow. The alleged choice between the two directions proposed by Bourg always makes the continuation of “scientific and technical efforts” appear to be an inevitable future—efforts that Bourg even wants to see “more than ever redoubled”. We therefore once again find ourselves faced with a false alternative, according to the classical model set forth by Anders: “For their last meal, those who are condemned to death are free to choose whether they want green beans with sugar or with vinegar.”

Another feature that Salomon’s doctrine shares with so many others is the fact that he thinks that the only problem posed by technology is the “specter of the greatest catastrophe ever produced by the hand of man”. In his view, this is what “sows the seeds of doubt about the very foundations of the rationality of industrial societies”. This assertion is correct. It is no less true that, by concentrating on the extraordinary dysfunctions that constitute catastrophes properly speaking, he refuses to see—but this is precisely the question that he wants to avoid posing—that the normal functioning of a society ruled by the imperatives of technology is already itself a catastrophe, only one that proceeds a little more slowly.

***

Before we address the question of neotechnological conditioning as such, we shall specify just what we understand by the terms technics and technology. One of the most striking characteristics of the abundant literature devoted to “technics” is the fact that the very concept of technics is almost never defined, as if it were something that was self-evident; but this is far from being the case, and this domain is usually obscured by a certain haziness that fosters misunderstandings. We must therefore make some indispensable clarifications, which will lead us rapidly to the heart of the problem.

The term technics, in its most generally accepted meaning, designates all procedures (by which we mean standardized processes) that allow us to implement certain measures for the purpose of achieving a goal. Opening a bottle with a corkscrew is a technical operation, just like the emptying of a gigantic oil tanker, shifting gears in an automobile, or the solution of a math problem by an elementary school student. There are simple and complex technics. The latter “require … tributary technics … with whose combination a well-defined technical action is carried out” (Bertrand Gille, Histoire des techniques, Gallimard, 1978). One may then speak of a technical totality, in which “each part is indispensable in order to achieve the result”. Gille offers the example of the process of smelting metal ores, which requires the contributions of a large number of factors in order to be carried out: “problems of energy, the problem of components, minerals, fuel, oxygenation, the problem of the instruments themselves, the blast-furnace and its special components, metal plating, insulation, moulds.” At a more general level, a technical system includes (again according to Gille’s definition) “all the technics [that] are, to various degrees, dependent upon one another and [which display] among themselves a certain coherence”. In order to represent a technics, regardless of its particular nature, in its real complexity, we must take into consideration the technical system in which it is inscribed and which makes it possible. And a technical system is never exclusively technical, but is always also economic, social and political, since it is understood that the interdependence of technics within any given system is itself inscribed in a totality of economic, social and political relations. (We shall leave aside the question—which we consider to be somewhat analogous to that of the chicken and the egg—of whether or not one of these instances is determinant in relation to the others.)

A technical system, obviously, is never neutral, since it is indissociable from an economic, social and political totality. It is correct to say, as Anders said in Die Antiquiertheit des Menschen (“The Obsolescence of Man”, 1956), that

“Each instrument is not, for its part, anything but a part of the instrument, nothing more than a screw, one instrument among others; a piece that, in part, responds to the needs of the other instruments and, in part, imposes in turn, by its very existence, the need for new instruments on the other instruments. It would make no sense at all to say that this system of instruments, this macro-instrument, is a ‘means’ that is at our disposal so that we can choose our goals. The system of instruments is our ‘world’. And a ‘world’ is not the same thing as a ‘means’.”

The individuals who coexist in any given society never find themselves in a situation of open choices, but are determined to one extent or another. There is no such thing as absolute autonomy, whether in relation to technics or to anything else; it is an intellectual projection. There are, however, technical systems (and therefore, indissociably, economic, social and political systems) that render individuals more autonomous than other systems. The loss of autonomy represented by the advent of machinery, for example, is incontestable:

“We need only think of the psychological and physiological hardships that are entailed by the processes of large-scale industry: by subjecting labor to the standardization of working hours and the rhythms of the work process, the respect for order and hierarchy, the economy of gestures and words, meant to impose a veritable industrial straitjacket via discipline. And the division of labor, which came long before industrialization, would accentuate, simplify and fragment the tasks of production, changing the very content of the job, which became increasingly more parcelized, repetitive, uninteresting, the source of a new kind of fatigue that was less muscular and more nervous.” (Jean-Jacques Salomon, Prométhée empêtré. La résistance au changement technique [1982], Anthropos, 1984.)

The expression, “technical environment”, often employed to designate the technical system of the industrial era, is deceptive, since it tends to assimilate technics and machine industry. The preindustrial world was no less of a “technical environment” than the industrial world (one may thus seriously speak of “the industrial revolution of the Middle Ages”); it was, however, a different “technical environment”, which was undoubtedly—to borrow the expression of Anders—a “world”, but it could not yet claim to be the world, strictly speaking. The system of artifacts had not yet been imposed as a second nature: there was still a world that was external to the “technical environment”; the very existence of nature was proof of this, a self-evident fact. It is the nature of machine industry to gradually replace the world; it is in a way programmed to make nature disappear and replace it with an artificial world, and ultimately to replace humanity (a regrettably “natural” species) with a new, semi-artificial species.

It is undoubtedly this confusion between machine industry and technics that sometimes leads those who are actually—like Anders and Ellul—hostile to machine industry to declare their hostility towards “technics”. To say that one is “against technics” makes no sense; it would be like saying that one is “against food” or “against sleep”. The “radical” dream of an entirely autonomous individual who is totally disencumbered of technics is senseless. Without technics, humanity would disappear; which does not mean that all technics are valid, nor that technics is the essence of the human species. It is simply one constitutive element of humanity among others. The critique of machine industry from the perspective of the disalienation of post-industrial humanity thus does not have the final goal of the suppression of “technics” in general, but the replacement of a particular technical system—our current one—with a less alienating technical system (assuming that the total absence of alienation, that is, pure autonomy, is impossible). Whether or not this is actually possible is another question, but first we have to avoid being mistaken about what is at stake and not speak in vain.

Technics in general is often confounded with technology. This term at first designated the discipline whose object was the study of technics. But it has come to designate what is also called technoscience, that is, a stage of the development of technics in which the latter has ended up being confused with science—which is a recent historical phenomenon—and in which science and technics mutually legitimate each other. Jean-Pierre Séris, in a work that is in other respects dubious (La Technique, P.U.F., 1994), has described the contradiction inherent to the use of this term:

“The word technology is used because it seems to possess a dignity that technics does not possess…. what is added in the word ‘technology’ is the suffix, derived from logos (= reason, discourse), the reference to the logical, discursive, rational, scientific dimension…. technology … eventually only designated technics in general, but also is taken to constitute the hard core of all technics, the essential model and complete, perfected and ultimately fully intelligible form of the technical phenomenon…. But the ubiquity of technical objects, and of dense networks of technical relations, does not mean that we have to carry out delicate, skilled and difficult operations in order to use them…. We live in a world where the accumulated ‘capital’ of technical knowledge is colossal, and at the same time, we have less need of technical knowledge than our ancestors…. Everything takes place as if the most economical and effective way of proceeding was to leave ‘technology’ to the technicians or the technical specialists. Technology is someone else’s affair…. The contemporary homo faber is himself technologically exempt, as an individual, from being a technician…. Technology, from this perspective, is the name of the technics of which we feel ourselves dispossessed. Technology takes place outside of us, without us.”

The term “technology”, far from indicating a greater mastery of technical rationality, has therefore finally come to designate the opposite: “a technics that has lost its logos … transformed into the incommunicable and strange” for nonspecialists, and which sometimes arouses veneration and “blind faith [in] the efficacy of technical resources”, and sometimes an anxiety generated by “the feeling of dispossession in the presence of our ‘technocratic’ surroundings”.

The mystification—the “bluff”, as Ellul called it—inherent in the employment of the term technology, its ideological character, far from disqualifying its use, must to the contrary, we believe, legitimate it; for that is just the meaning to which the term technology lends itself: the real dispossession is accompanied by an imaginary transfiguration, so that the modern individual, totally powerless in the face of the instruments that constitute the environment of his everyday life (automobile, computer, dishwasher, stereo system…) and which are, as far as he is concerned, largely black boxes, magical apparatuses that work without his understanding how, and which then mysteriously break down without his knowing how to repair them, this modern individual therefore believes that he is invested with the powers of an omnipotent demiurge of technoscience the minute that he turns the key to his air-conditioned car’s ignition or connects to the Internet.

The ambivalence of the effects of technology on individuals was already described by Horkheimer and Adorno during the 1940s:

“While the individual disappears behind the apparatus that serves him, he is more than ever before at the mercy of this same apparatus. The state of injustice, powerlessness and malleability of the masses grows at the same time that the quantity of goods assigned to them increases. The wave of necessary information and domestic entertainment makes men more naïve at the same time that it brutalizes them.” (Dialectic of Enlightenment, 1944.)

There was much more widespread technical mastery in the everyday or professional life of individuals prior to the era of technology than there is in the so-called “technical environment” of industrial society, where the transfer of responsibilities from man to machine is obvious. Nietzsche observed that the machine of the industrial age “humiliates” the human being:

“How the machine humiliates – The machine is impersonal: it deprives labor of its dignity, of the individual qualities and defects that are characteristic of all labor that is not done by machines—of a portion of humanity, therefore. In other times every purchase made from artisans was a distinction conceded to a person, with whose signs one surrounded oneself: so the usual objects and clothing became symbols of mutual respect and personal affinity, while today it seems that we live only amidst an anonymous and impersonal slavery. The lightening of the burden of labor must not be bought at so high a price” (Human, All Too Human).

Anders, for his part, in The Obsolescence of Man, evokes the “Promethean shame” of the individual reduced to being nothing but an interchangeable cog within a gigantic apparatus of production and consumption. In this role, the human being is clearly revealed to be inferior to the machines, and this is the source of his inferiority complex: the shame of not being efficient enough, of having mood swings, of aging. It is no longer the machine that serves man, but man who is becoming the servant of the machine. Transformed into a product of his own products, he has come to attribute to machines an absolute power that he does not possess—but it must be recalled that the machines do not possess it either. Hence the idea that slavery to machine industry is the destiny of the human species; hence, too, the very widespread notion, formulated in 1964 by Dennis Gabor in a work entitled, Inventing the Future, that “everything that is possible will necessarily be realized”. This formula, understood literally, is false: technicians do not realize “everything that is possible”, but only that which they have been seeking to realize for a very long time. Many things that are possible, with regard to technical matters, are not addressed at all, many paths are not pursued, not because they are “dead ends”—isn’t the current course followed by technical development a dead end?—or even because these things are “unprofitable” (the development of cable television or the cell phone was not commercially profitable), but because they did not want to go in that direction.

The technological orientation of our society is not, contrary to what Hans Jonas claims in Das Prinzip Verantwortung [published in English under the title, The Imperative of Responsibility: In Search of Ethics for the Technological Age], “a revolution that no one has planned, totally anonymous and irresistible”. It appears to be irresistible, as in the case of the rise of Nazism or Stalinism, only because the populations involved did not know how to or were incapable of resisting it. If technology today appears to be an irresistible force, a destiny, it is above all because its advocates have known how to make it irresistible (nuclear power is the most obvious example). And this process has not been “anonymous”: neither the atomic bomb, nor computers, nor nuclear power plants, nor the Internet, nor the deciphering of the human genetic code were born spontaneously; all of these things were the results of programs pursued for decades, often initiated by the state or benefiting from its massive support, as we showed at the beginning of this chapter. Thus, so that the use of the Internet would become generalized, infrastructures (high capacity fiber-optic networks), the famous “information highways”, had to be installed on credit, and it was states that assumed responsibility for this, precisely because this stage of the establishment of these networks was not profitable. In the past, the rail networks, the highways, the electrical and telephone grids were not born, either, from chance or from some kind of unconscious collective labor. The cities and the countryside only became what they are today because their transformation was planned in research departments. And even the first industrial revolution forced a large number of the inhabitants of rural societies to abandon the countryside and go to work in the city, in the new factories. Obviously, it must be pointed out that these different programs did not always obtain the desired results, that the predictions involved were often frustrated, and that they are—like every self-respecting plan—constantly re-adapted. We should also note the play of the relations of forces between the different social groups, in order to dismiss the simplistic idea of the existence of a “mega-plan” that would itself orient technological development as a whole: what exists, rather, are various plans with diverse and sometimes conflicting orientations. We can summarize this by way of a formula: with regard to technological questions, not everything that is planned comes to fruition, but everything that is realized has been planned.

Technology is no less an ideology than it is a technics; it is “ideology materialized”. (It is therefore vain to attempt, as some authors do, to separate the technicist ideology from technology as such on the pretext that the latter is nothing but a neutral “tool”.)7 This ideology has transformed the world in such a way that it has been imposed, in the eyes of both its advocates as well as its opponents, as the only possible world, thus becoming the truly dominant ideology. All references to realities external to this world—and especially to the idea of nature—are stigmatized as unreal, and therefore ruled out, the non-existence of nature in turn confirming the identification of the world of technology with the world in general. A particularly notable example of this de-realization is offered by Jean-Paul Curnier with regard to the notion of the “rural landscape” (La Tentation du paysage, Sens & Tonka, 2000). This author explains to us that “there never was a rural landscape”, because

“the rural is the myth as reality (or the mythical image par excellence) for a type of civilization devoted to transformation and change…. It will therefore have to be admitted that the ‘rural world’ has never existed and that it has also existed forever since it is a topic of discussion, as an always lost world, as the ongoing presence of a loss, as the scene of the drama of consciousness. The painting is a mental object, as Leonardo da Vinci would say; inversely, the always reinitiated attempt to make a mental image of the immutable coincide with the material reality of the countryside, just like the feeling for origins that is triggered by our view of the countryside, forces us to consider that what we call the countryside or the rural world is even more of a mental object.”

Curnier insidiously proceeds from the (undeniable) fact that the “rural world” has given rise, all throughout history, to nostalgic representations, to the assertion of the non-existence of the rural world outside the mind. This assertion is the direct consequence of the philosophical postulate of deconstructionism, according to which the truth is an imposture. Logically, Curnier defines the truth as “the inverted figure of what the need for it gives rise to within us”; as a result, “everything is in principle a simulacrum, beginning with the truth itself”. The explanatory reasoning merits more extensive quotation:

“Being the metaphor for a reconciliation of the lost unity of man and the world in the very principle of the truth, the simulation is not an avatar of the authentic, a secondary form, but the very horizon of the truth, that is, of the production of metaphors considered to be increasingly more necessary due to the growing importance of the intellect in human activities. As human activity becomes intellectualized, so, too, does the drama of separation become more active, stealthily, for the same reason; and the more acutely, as well, is the need felt for metaphors of authenticity. To the point that the artifice is no longer distinguished from authenticity from the moment when both are equally experienced as surrogates; to the point where the truth is judged by its immediate efficacy as a metaphor rather than as the illusion of transcendence. The proliferation of (increasingly more circumstantial and obsolete) truths, or more precisely, the effects of the truth, merely registers the progress of the anxiety of separation and the madness of the need for reconciliation.”

For Curnier, the notion of truth has no positive consistency at all—in this sense, it is an “illusion of transcendence”, the illusion that there is something external to the human psyche; the desire for truth is instead explained by psychoanalytic reasons. And here we meet up again with postmodern relativism; and it is not just a coincidence that the development of this philosophical enterprise of de-realization of the world is contemporaneous with the emergence of technology as a substitute for the world, as the sole and exclusive “real world”. Thus, Séris, in his book on technics quoted above, is incapable of conceiving of the possibility of the existence of something other than industrial food: the “nostalgia for a nature that has disappeared” gives rise to the creation of “more natural ‘natural’ or ‘light’ foods … industrially manufactured, fabricated, preserved … based on proteins, vitamins, caloric content, entities that are clearly natural, but which nature is incapable of providing in this form”. The existence of ham, milk or artichokes in any other form than the high-tech substitutes that have replaced them is something that is literally unthinkable for this philosopher, and you need to be an anthroposophist or something like that to practice an “organic” form of agriculture that responds to other criteria than those of industrial yields.

If relativist idealism dissolves the non-technological world into a representation, it also de-realizes technology, because it perceives nothing but spirit and representation where unrepentant materialists see objects that could not be more concrete. Thus, for the propagandists of the cyber-world, a computer is not an object, but an immaterial entity, or, to put it another way, a spirit—hence the expression, ghost in the machine, coined several decades ago to describe the computer—likewise, the Internet is, according to Pierre Lévy, a “collective intelligence”, and for the children who are now learning how to use it, it is not unlikely that the Internet will become “the world soul”. Now you can read, no less—in the Senate Report on the T.G.B.N.F.8—“if it is not on the Internet, it does not exist”.9 And Michel Serres jubilantly proclaims:

“… today, our memory is in the hard drive. Also, thanks to computer programs, we do not need to know how to calculate or use our imagination. The human being has the faculty of delegating the functions of his body to objects. And this gives him the opportunity to do other things…. Tomorrow, the body liberated by the new technologies will invent something else.” (L’Expansion, July 20, 2000.)

It takes a philosopher as rigorous as Michel Serres to place such faith in the power of “invention” that will supposedly spare the finally “liberated” “humans” from having to exercise their memory or imagination; these humans will nonetheless have to operate a complicated electronic apparatus every time they want to access the faculties so conveniently delegated to computers, an impossible task, since they will have lost, together with their memory and imagination, all ability to reason and calculate. When Serres claims that “informatics calculates, memorizes, and even makes decisions for us”, he is only taking literally (and clearly not in an innocent way) the anthropomorphic metaphors that identify the computer with a human being:

“The computer, it is implied, has a will, intentions, reasons—which means that humans are freed from all responsibility in relation to the decisions of the computer. By means of a curious form of grammatical alchemy, the phrase ‘We use the computer to calculate’ comes to mean ‘the computer calculates’. If a computer calculates, then it can decide to err or not to calculate the entire sum. This is what the bank employees are telling you when they say that they cannot tell you how much money you have in your account because ‘the computers are down’. This implies, of course, that no one in the bank is responsible. (John McCarthy, who coined the term ‘artificial intelligence’, proclaims that ‘it can be said, even concerning the most simple mechanism like a thermostat, that it has opinions’. In response to the philosopher John Searle, who asked the obvious question, ‘Just what opinions does your thermostat have?’, McCarthy said, ‘My thermostat has three opinions—too cold, too hot, and the correct temperature’.)” (Neil Postman, Technopoly: The Surrender of Culture to Technology, 1992.)
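
McCarthy’s “three opinions” are easy to make concrete, and doing so supports Postman’s point rather than McCarthy’s. A minimal sketch in Python (our illustration, owing nothing to Postman or McCarthy; the class, setpoint and tolerance are invented for the example) shows that each “opinion” is nothing more than the comparison of a reading against a stored number:

```python
# A deliberately trivial model of McCarthy's thermostat: its three
# "opinions" are nothing but comparisons of a reading against a setpoint.
class Thermostat:
    def __init__(self, setpoint: float, tolerance: float = 0.5):
        self.setpoint = setpoint    # desired temperature (invented value)
        self.tolerance = tolerance  # dead band around the setpoint

    def opinion(self, temperature: float) -> str:
        """Return one of the three 'opinions'."""
        if temperature < self.setpoint - self.tolerance:
            return "too cold"
        if temperature > self.setpoint + self.tolerance:
            return "too hot"
        return "the correct temperature"

t = Thermostat(setpoint=20.0)
print(t.opinion(17.0))  # too cold
print(t.opinion(20.2))  # the correct temperature
```

To say that this object “believes it is too cold” adds nothing to saying that the reading is below the setpoint minus the tolerance; the grammatical alchemy does all the work.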

If the intellectual mystifications of a Serres have so much success in the media, this is because his discourse is optimistic and assures us that we are on the right road; anyone who criticizes this optimism is—according to his own expression—an “old crank” who thinks that “everything was better in the old days”, as if, living in the best of all possible worlds, we have no other choice than either the blessed acceptance of what exists or the nostalgic idealization of a dead past. And since it is the nature of the “old” to disappear with great rapidity, the young people to whom the future belongs will be able to listen to recordings of the interviews with Michel Serres over and over again, so as to give themselves over completely to work when the philosopher is no longer there to encourage them in real time.

Technology has led us, almost imperceptibly, to neotechnology. Neotechnology is an avatar of technology, based (ideologically) on cybernetics and (practically) on the mathematical theory of communication; its key point is the codification of information in a digital form, and its main characteristic is the fact that it is nothing but a means whose only end is itself: the “communication” in question here is not the communication of something, but the communication of communication—the confirmation that there is full communication, that there is a transmitter and a receiver, with no other purpose than “communicating”. (The use of the intransitive form of the verb, which is a novelty in French, clearly indicates that this communication is above all a communication without an object.)
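
What the “codification of information in a digital form” amounts to can be illustrated with a worked example. In Shannon’s mathematical theory of communication, a message is only a stream of symbols: its encoding and its information content are computed identically whether or not it says anything. A minimal sketch in Python (our illustration, not the author’s; it uses only the standard library):

```python
# In the mathematical theory of communication, a message is only a
# symbol stream: its digital encoding and its entropy are computed
# identically whether or not it "says" anything.
import math
from collections import Counter

def entropy_bits_per_symbol(message: str) -> float:
    """Shannon entropy of the message's symbol distribution."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

for msg in ["meet me at noon", "xqzv kfjw plrd"]:
    encoded = msg.encode("utf-8")  # digital codification of the message
    print(msg, "->", len(encoded), "bytes,",
          round(entropy_bits_per_symbol(msg), 2), "bits/symbol")
```

The meaningful sentence and the keyboard noise are processed alike; nothing in the formalism registers whether the communication is a communication of something.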

If it has only taken a few decades for computers and other robots to cease to appear as disturbing automatons and to become instead the ordinary companions of everyday life, this is because social relations were simultaneously and systematically disintegrated. Why would one want to take college courses, purchase train tickets or check one’s bank balance over the Internet, without leaving home? Because going to the supermarket, to a train station or a bank is an experience that has nothing pleasant about it, and because the person whom one faces in a supermarket, a train station or a bank is no longer anything but a humanoid automaton. One therefore comes to prefer the coldness of the relation with a machine over the coldness of human relations. And, due to the lack of human friends in a society where individuals are increasingly more separated and where the other is only perceived as a threatening entity, computers—having assumed more agreeable forms than they had in the past—become substitute “friends”. The Japanese, who are far ahead of us with regard to dehumanization, first invented the Tamagotchis, virtual creatures that call out for the attention of their owner if he has not remembered to feed them at their (virtual) dinner time; then, they introduced electronic dogs and cats, clumsy imitations of pets:

“BN-1 reacts to being petted and is capable of learning how to play. Do you remember Aïbo, Sony’s robotic dog? Well, here is the cat, Bandaï version, christened with the lovely name of ‘Communication Robot BN-1’. The result of five years of research in artificial intelligence. For BN-1 (for short) is planned to be more communicative and less expensive than his rival, the dog Aïbo: he will cost ‘no more than’ 3,000 francs. BN-1 has a torso equipped with technologies that allow it to be autonomous in its movements. In order to be recognized by the little creature and to play with it, its master must use a transmitter on a necklace. Thanks to sensory receptors, the robotic cat is even capable of simulating reactions to being petted. A very feline ‘pheromone receptor’ even allows it to communicate and play with his fellow android-cats. But BN-1 desires above all to become man’s best friend. He will evolve and ‘grow’ depending on how much attention his master devotes to him. The most demanding masters could add new behaviors by using the two software programs with which he is endowed at the same time.” (Transfert, Summer 2000.)

The nightmare of Philip K. Dick is now almost a reality:

“After a hurried breakfast … he ascended clad for venturing out, including his Ajax model Mountibank Lead Codpiece, to the covered roof pasture whereon his electric sheep ‘grazed’…. it lay ruminating, its alert eyes fixed on him in case he had brought any rolled oats with him. The alleged sheep contained an oat-tropic circuit; at the sight of such cereals it would scramble up convincingly and amble over…. Owning and maintaining a fraud had a way of gradually demoralizing one. And yet from a social standpoint it had to be done, given the absence of the real article. He had therefore no choice….” (Do Androids Dream of Electric Sheep?, 1968.)

(All these strange inventions that now constitute our everyday fate arouse no surprise at all when they appear on the market, since they have been banalized—in some cases for decades—by science fiction novels. The authors of these novels are not prophets, but—among the best of them, anyway, like Philip K. Dick—careful observers, who only extrapolate on the basis of the reality around them. They thus shed light on latent possibilities that are as yet unperceived, which form part of what one could call the conscious imaginary of our society; the writers and filmmakers of science fiction have the function of keeping it up-to-date, which allows the public to adapt to ongoing changes. The reader—or, more often, the spectator—becomes accustomed to frequenting implausible, paradoxical, and unexpected universes, which serves to considerably attenuate his famous “resistance to technological change”, that force of inertia so feared by technocrats, who detest more than anything else seeing how this force stands in the way of the implementation of their innovations. Which is why the media, taking over from science fiction, tells us immediately, as soon as a new “generation” of computers, cell phones or satellite-guided cars becomes “operational”, that the next generation is already in the pipeline and that we have to wait for this imminent “revolution” to once again shatter all the ideas we take for granted; and this is why they have been describing for us for decades, at regular intervals, “how we shall live in the year 2000”, “in 2015”, “in 2025”, etc. The fact that the predictions usually turn out to be entirely false is of no importance at all; what matters is to accustom people to the idea that tomorrow will be very different from today, and that this difference is the product of an inexorable evolution, concerning which the metaphor of the succession of “generations” demonstrates both its natural as well as its necessary nature.)

It was also in Japan that, about ten years ago, “otakuism” was born, an expression designating the “vicarious lifestyle” of the otakus, young people who are permanently immersed in a world that is almost exclusively composed of video games and mangas. In France, computer-assisted autism is beginning to spread alarmingly, if we are to believe this survey (Libération, August 8, 2000):

“32% of French respondents declared that they felt they were capable of living isolated, for one month, in an apartment with a PC and an Internet connection as their sole companion…. Our European neighbors are much less tempted by such an experience. For Intel (the corporation that had conducted the study), it is proof of the beginning of a ‘real love affair between the French and the Internet’.”

In order to understand how we managed to reach this point, we have to take into account the results of a poll conducted by INSEE and published in Le Monde (March 2, 1998) under the eloquent title: “1983-1997: The French Are Talking Less.”

The case of the Internet is similar to that of the cell phone or the electronic pets. It is always a matter of satisfying a basic desire for affective relations and for communication while keeping other human beings at a distance—with whom one is, of course, in constant but always indirect contact, via telephone or the Internet—or of just doing without them altogether. And now we can see young zombies in love with Lara Croft, the heroine of a computer game now transformed into the first virtual “star”, or with the “beautiful Ananova”, an anchorwoman on a news program televised over the Internet:

“Ananova was born in the British press agency, PA (the Press Association)…. It was thought that her face would have a ‘global appeal’—a worldwide charm…. the trailblazers of PA New Media attempted to give her character three traits: ‘believable, trustworthy, and a face that stands out in the crowd’. They needed a striking persona. That is why they gave her green hair, which you immediately notice … and especially the composition of a personal legend. Ananova is a modern, ‘girl about town’ … and single. ‘They have received two million messages from all over the world. Not only to show pictures of people who look like her, either. On Valentine’s Day, Ananova received love letters, and even one letter asking for her hand in marriage!’ The great unknown is the star’s body…. It has not yet been shown to the public, but it exists. Like the face, it was conceived ex nihilo, by superimposing mock-ups, photos and sketches of feminine stereotypes, from Marilyn Monroe to today’s fashion models. Whereas the virtual anchorwoman of Channel 5 was modeled on a real woman, scanned from head to toe, Ananova was invented from many different pieces.” (Transfert, Summer 2000.)

It might be objected that this is nothing new; the novel had already produced such effects: Don Quixote and Madame Bovary confused the real world with that of the romance novels or the stories of valiant knights, and the preachers of centuries past never ceased to condemn the pernicious reading of novels, providers of bad examples. Realistic graphic representations also produce such effects. Plutarch recounts how one of Alexander’s generals suffered convulsions when, after Alexander’s death, he saw a painting of his king; he thought that he had seen a ghost. The feeling of the surreal that is provoked by computers, and especially by the Internet, is not, however, an exceptional phenomenon that only affects particularly infantile or fragile persons; it is the rule rather than the exception. Already, during the great era of the cinema, spectators fantasized about stars manufactured for that purpose, on the basis of a human substratum that is today no longer considered to be indispensable.10 This sense of the surreal is much closer to the religious sentiment than to the identification aroused by fictions and representations. The Internet is neither a fiction nor a representation, and this is what gives it its power. For the Christians, the life, death and resurrection of Christ were not fables, as the battles of the gods and the titans or the stories of the love affairs of Zeus were for the Greeks; they were a reality, a historical fact that really took place, and also the prospect of a redemption of humanity, the overcoming of human imperfections in the City of God. The same thing is happening today with the Internet:

“Cyberspace consists of transactions, relationships, and thought itself, arrayed like a standing wave in the web of our communications. Ours is a world that is both everywhere and nowhere, but it is not where bodies live. We are creating a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth. We are creating a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity. Your legal concepts of property, expression, identity, movement, and context do not apply to us. They are all based on matter, and there is no matter here. Our identities have no bodies, so, unlike you, we cannot obtain order by physical coercion…. We will create a civilization of the Mind in Cyberspace. May it be more humane and fair than the world your governments have made before.” (John Perry Barlow, "A Declaration of the Independence of Cyberspace", 1996.)

The Internet is the dumping ground for all the utopian phantoms that can no longer find anchorage in our concrete world, definitively deprived as it is of other places, of a virgin space where everything could become possible again. Cyberspace is thus regarded as the “new frontier”, replacing the dreams aroused by the conquest of the wilderness in the American West and by the subsequent conquest of outer space, which did not take long to grind to a halt. The Internet also appears as a world ruled by the “economy of the gift”, the realization of the aspirations of the “anarcho-communists” of the sixties:

“Even selfish reasons encourage people to become anarcho-communists within cyberspace. By adding their own presence, every user contributes to the collective knowledge accessible to those already on-line. In return, each individual has potential access to all the information made available by others within the Net. Everyone takes far more out of the Net than they can ever give away as an individual…. the gift economy and the commercial sector can only expand through mutual collaboration within cyberspace. The free circulation of information between users relies upon the capitalist production of computers, software and telecommunications…. Within the digital mixed economy, anarcho-communism is also symbiotic with the state…. Within the mixed economy of the Net, anarcho-communism has become an everyday reality.” (Richard Barbrook, “The High-Tech Gift Economy”, 1998.)

Once again, the invisible hand is there to bring about a magical convergence of egotistical interests and public prosperity, and as a bonus it also brings about the resolution of all the contradictions of our unfortunately material world: capitalism and the gift economy mutually reinforce each other, anarcho-communism and the state work in concert…. It is a formidable vision, and it is all the more beautiful in that, unlike Christianity or the classical utopias, it is not a vision of the future, but a discourse that claims to describe a reality that already exists; this pie-in-the-sky country exists, and all you need to do is connect in order to live there eternally on love and fresh water. The “anarcho-communists” who advocate this ideology perform a major service for the state and corporate promoters of the Internet, since it is precisely by presenting the Internet as this new “country of marvels” where everything is free that one instills in people the need to obtain the computer equipment necessary for connecting, confident that once they have been hooked, they are hooked for life.

Each new tool of neotechnological alienation is presented, right from its debut, as another step towards individual autonomy and the realization of all our frustrated aspirations: with the cell phone, one can be reached anywhere and is sure of never being alone; with the Internet, real life is here, twenty-four hours a day, in a much more exciting way than the miserable daily life of the middle class bachelors who constitute—together with children—the “targets” of neotechnology. The aficionado of specialized pornography and the collector of postcards depicting Queen Victoria, the fan of The Avengers and the devotee of tattoo art can communicate in “real time” with their counterparts all over the world. As one recent advertisement put it: “on the Internet, you are the only limit.” After all, one need only set aside a few hours for sleep now and then, at the risk of missing thrilling discoveries and conversations. And this is how the promised liberation once again leads to the “Promethean shame” that Anders described, this time born from the confrontation between a mere mortal and a supposedly eternal and indestructible network.11

But the public relations arguments that praise the merits of the cell phone or cyberspace are nothing but another aspect of the “hidden persuasion” that is being exercised. Thus, the cell phone, that “nomadic” appurtenance that follows the individual wherever he goes, actually entails a loss rather than an increase of autonomy. From the moment when the possibility of constantly being located exists, it is transformed into an obligation; in numerous professions, it is inconceivable that one should not be able to locate a “colleague” at any time, wherever he may be. And this instrument—just like the credit card—is an effective means of surveillance of the movements of an individual, a fact which has not gone unnoticed by the police. The digital identification of telephone data allows the shortest phone call to be traced and permits the content of conversations to be very easily recorded (e.g., the nightmarish system of total control over worldwide conversations over the phone and the Internet, launched by the Americans under the name of “Echelon”); one may also purchase, over the Internet, theoretically illegal telephone bugging devices that are easy to install. The Internet, for its part, is also an effective system of control. The sites visited even leave a trace on the web surfer’s computer: these “electronic snitches” called cookies are information files that serve as the basis for data collection, used by advertisers to “target” advertising according to the “profile” of the users. And the surfer rapidly learns that you have to pay for what is free: not only is the Internet itself not free—contrary to what those who use it at their workplaces may think,12 overlooking the fact that they are not actually “surfing” for free but because their employers bear the expense of the connection, subscribe to pay services, etc.—but even the apparently “free” sites are actually financed by invasive advertising, with pop-ups in flashing colors (which, undoubtedly, will soon feature an audio component) that are hard to ignore. A telephone service provider also recently proposed to offer free communications to its clients, interrupting the conversations at regular intervals with advertisements.13 Finally, we must not forget that the promoters of the cell phone and the Internet practice, as a matter of principle, dumping, that is, they sell their services at a loss; in order to “create a market” that can rapidly attain the “critical mass” that allows for commercial profitability, they must sell their products at a low price, according to the well-known formula of the decoy price. Once these products find a place in the customers’ habits and the “need” for them is firmly established, prices inevitably rise, as always takes place when a captive market is formed.14
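
The mechanism behind these “electronic snitches” is simple enough to sketch. A cookie is an identifier assigned by a server on first contact and returned by the browser with every later request, which is what lets an advertising network embedded in many sites link separate visits into a single “profile”. A minimal simulation in Python (our illustration; the function and site names are invented, and the real HTTP header exchange is elided):

```python
# Minimal sketch of cookie-based tracking: the ad network assigns a
# random identifier on first contact ("Set-Cookie"), and the browser
# sends it back with every later request ("Cookie"), so that separate
# visits can be linked into one "profile".
import uuid

profiles = {}  # the tracker's database: identifier -> sites where it was seen

def ad_server(site, cookie=None):
    """Simulate an ad network's response; returns the cookie value."""
    if cookie is None:
        cookie = uuid.uuid4().hex  # the "electronic snitch" is created
    profiles.setdefault(cookie, []).append(site)
    return cookie

# One browser visiting three unrelated sites that all embed the same ad network:
cookie = None
for site in ["news.example", "shop.example", "travel.example"]:
    cookie = ad_server(site, cookie)

print(profiles[cookie])  # ['news.example', 'shop.example', 'travel.example']
```

The “profile” is nothing more than the list of visits accumulated under one identifier; the targeting of advertising is built on top of such lists.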

Behind the apparent freedom of choice conceded to individuals regarding whether or not they will use these products, a veritable social contract is now taking shape. As the authors of a recent book point out,15 “understanding the opportunities offered by information processing and communications technologies has become an imperative for every individual”. This is a matter of “opportunities”—which in theory presupposes freedom of choice—but it is imperative that one should avail oneself of them; in other words, there is no choice. Similarly, there has never been a law forcing anyone to have a bank account, a checkbook or an automobile; but anyone who wants to live today without them (except, in the case of the car, some residents of the urban centers) would expose themselves to so many inconveniences that they give up even trying to do so, unless they deliberately want to deprive themselves of any kind of social life. The same authors also describe, in a distant and descriptive tone absolutely devoid of any critical impulse, the ubiquity of information technology in the lives of individuals, from the moment of their conception:

“Even before birth, the baby exists amidst computerized tools such as sonograms. After his arrival in the world, he is inscribed in the maternity registers, before inaugurating his social existence by way of a data file in the records of the civil registry. His first and last names identify him within a family and a community. Thus, he exists by way of the information that represents him. His life is marked by the data collected about him (age, gender, address, Social Security number, etc.) which are used by third parties (school, library, gym, the family doctor, travel agency, bank, etc.).”

And the fear of seeing an increase in the numbers of that “not insignificant fringe of the population that is excluded from the information revolution”—a notable reversal, since in reality it is the majority of the population that is labeled with this term of “fringe”—motivates “the generalization of teaching computer science in schools”, which confirms the voluntary yet compulsory character of participation in the “information revolution”. Parents or children who do not want to submit will be considered anti-social and will suffer criminal and psychiatric penalties for their obstinacy; the criminalization of “resistance to technical change” will be carried out in the name of social control and the struggle against exclusion:

“Police officers assigned to teach a class in a school in Largo (Florida) did not hesitate to handcuff a six-year-old girl who refused to watch a video about crime prevention. As the child cried, kicking and throwing her teddy bear at the television, the forces of order ‘took her into custody’ and held her for several hours in a center for youthful offenders. ‘The child had already been reprimanded for misconduct’, the school principal, whom the little girl had also spat on, explained to the American newspaper, The Tampa Tribune.” (Le Monde, April 26, 1997.)

Coercion wears the mask of humanitarian benevolence:16 the decoding of the human genome was similarly justified by the absolute humanitarian priority of developing gene therapies, even though, at the present time, these are nothing but an intellectual hypothesis. This is how conditioning works: while maintaining the appearance of consensus, it is presented as an inevitable destiny against which it would be illusory to attempt to fight.

***

The belief in progress, the universal belief of our time, has so much influence on people’s minds that it can be manifested in the crudest forms without any reply. When the suspension of Concorde flights was announced in August 2000, after a serious accident, a radio specialist on aeronautics shared his dejection with his listeners in the following terms: “For the first time in the history of humanity, we are going backward, since supersonic passenger jets will no longer be flying; we are returning to subsonic flight!” While it is obviously ridiculous to claim that there have never before been any setbacks in human history, it is no less ridiculous to think that supersonic flight constitutes in itself an instance of progress, or that its abandonment would therefore be a setback; and the commentary of this “specialist” is all the more inept insofar as the Concorde, which was from its very inception a commercial failure, was the only supersonic passenger jet in service in the entire world. This childish worship of progress lays claim to the motto of the Olympic Games: Citius, altius, fortius—“Faster, higher, stronger”.

There is no lack, however, of counter-examples to prove that technological innovation does not always follow a straight line and does not always constitute “progress”, even if we take this word only in the restricted meaning of an improvement in technical efficiency, that is, if we limit ourselves to comparing the various ways that certain means are deployed for the achievement of a goal, disregarding any other consideration. We shall examine first of all the case of audio technology, and then the manufacture of books, in order to prove the falsehood of the commonly accepted idea that novelty always corresponds with progress, and in order to provide a detailed description of the modalities of technological conditioning. The example of the CD, of course, is particularly interesting, because it shows that alleged “technical” progress is not always as irreversible as is often maintained.

The market debut, in the mid-1980s, of the compact disc was immediately presented as a great leap forward compared to the vinyl disc, called the “record” or “record album”, and almost everyone believed it—which allowed the average recording to double in price, for the same amount of music, during the transition from vinyl to CDs. Here are the main arguments generally advanced in favor of the compact disc:

1. Playing a vinyl record album requires the physical contact of the disc with a tone arm equipped with a needle, which leads to a gradual erosion of the needle and of the disc; playing a compact disc, on the other hand, is carried out via an optical reader, without any physical contact between the disc and the reader, and therefore without any physical wear and tear. The compact disc therefore offers both the illusion of immateriality, which is constitutive of neotechnology, and that of the eternal preservation of the audio storage device.
2. On a 33 rpm record album thirty centimeters in diameter, it is not possible to record more than forty or fifty minutes of music (from twenty to twenty-five minutes per side) without the sound quality suffering, for physical reasons, a considerable decline as one approaches the center of the disc; on a compact disc measuring about twelve centimeters in diameter, it is, on the other hand, possible to record more than seventy minutes of music with a consistent sound quality from beginning to end.17 A smaller disc with more music, and higher quality sound: who could ask for more? (A rough sketch of the arithmetic behind the seventy-minute figure follows this list.)
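Where the seventy-minute figure comes from can be checked with elementary arithmetic, assuming the standard encoding of the audio CD (two channels, sampled 44,100 times per second, at 16 bits per sample); a minimal, merely illustrative sketch:

```python
# Data budget of an audio CD, assuming the standard encoding:
# 44,100 samples per second, 16 bits (2 bytes) per sample, 2 channels.
SAMPLE_RATE = 44_100
BYTES_PER_SAMPLE = 2
CHANNELS = 2

bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS
minutes = 74  # nominal playing time commonly cited for a standard disc

total_bytes = minutes * 60 * bytes_per_second
print(f'{bytes_per_second:,} bytes of audio per second')  # 176,400
print(f'{total_bytes / 1e6:.0f} MB of raw audio data')    # ~783 MB
```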

In response to the first argument, it is easy to answer that, if the record album rapidly deteriorated, this is because it was improperly handled, which also applies to the compact disc, even though the latter is somewhat less fragile. And it is not the contact of the tone arm and needle with the record’s surface that causes the wear of the latter, but the use of a worn-out needle (the most frequent case) and the lack of caution in handling the record (gumming up the grooves with skin oil, scratches, warping); when handled correctly, a record album can last a long time—much longer than an audio cassette tape, for example, whose magnetic tape is irremediably altered after a few years. But even a worn record album can still be played, if only at the price of a few hisses and squeaks; while a “defective” compact disc is nothing but a totally useless piece of round plastic. The compact disc either works or it does not work; there is no in-between zone. Playing it is a magical operation that leaves the listener totally powerless before the eventuality of a breakdown. And the average “lifespan” of a compact disc is a little-known fact: it is estimated at thirty years at most. While it will theoretically still be possible to play, in the year 2035, the vinyl discs that contain the electric music of the late sixties—just as we can today, if we so desire, play recordings from the 1920s to the 1950s on 78s or re-editions on 33s—it is hardly likely that future music buffs nostalgic for electronic Goa Trance, Marseilles ragamuffin or baroque concertos will be able to satisfy their passion by playing, in that same year, the compact discs on which all these marvels were recorded. But the question of the lifespan of this new device was not at all taken into account at the time of its introduction, since most audio or audiovisual products that have been marketed—this was already true of vinyl discs—are manufactured to be consumed immediately and rapidly forgotten;18 the audiophiles devoted to “great music” and the collectors of recordings, those who comprise a flourishing market for historical re-releases and exhaustive anthologies, are ready, for their part, to content themselves with the idea of an eternal disc that never wears out, since the fascination with the alleged “purity” of digital recording makes them totally blind to the relatively short lifespan of the compact disc.

(The problem of the long-term preservation of data stored on information equipment has not yet been solved. Thus, the T.G.B.N.F.,19 if we are to believe L’Express of May 27, 1999, was engaged in the search for “a perennial data storage medium, that is, one that is strong and can be used for a long time”. It thought at first that it had discovered the answer in the Century-Disc, “a CD made of tempered glass” which is “not subject to oxidation, impervious to etching (from light), humidity, water, pollution….” The manufacturer of this miraculous product, the Digipress Corporation, “assures us that … each and every glass disc will last at least fifty years”, even though that is only half as long as its name would lead one to expect a Century-Disc to last. This example demonstrates that the term “perennial”, in the current context of the uncontrolled acceleration of technical change, is no longer synonymous with “permanent” and instead designates, much more modestly, something that is “strong and can be used for a long time”; the estimated longevity (estimated on what basis?) of about fifty years—even if this is about twenty years more than an ordinary compact disc—seems to be the greatest effort that an institution devoted to data preservation can achieve today in order to extend the life of data into the future.)

The rapid replacement of the record album by the compact disc was not due only to the fact that the latter was depicted as longer-lasting. A multitude of other factors also played a part, beginning with the material quality of the vinyl discs themselves, which had become worse since the beginning of the 1980s (due to the use of materials of inferior quality, especially those derived from recycled and rolled discs), and which made the compact disc seem to be a clear acoustic advance in comparison with the lousy trash that vinyl discs had become. But it was above all the cessation of sales of the latter by the two main distributors in France—FNAC and Virgin—that signed the death warrant of the vinyl disc. The choice of whether or not to buy a compact disc player therefore gave way to the obligation to do so, unless one were to choose to just stop buying audio discs altogether (which was perhaps preferable). In countries where this coercion was not so consistently imposed, the replacement of vinyl by CDs was neither as rapid nor as systematic as it was in France.

If we now proceed to the second argument in favor of the compact disc—the quality of the audio reproduction—we are compelled to observe that, by ceasing to be “analog” (the vinyl disc) in order to become “digital” (the compact disc), audio reproduction has in reality undergone more of a regression than an advance, by reason of the very nature of the technology employed:

“The analog method of recording and preservation consists in engraving on a vinyl disc a series of grooves and channels that a piezo-electric crystal (the “stylus” tipped with sapphire or diamond) translates into variations in electrical tension. The digital technique cuts every second of the audio signal into 44,000 equal parts (48,000 for professional sound equipment) and codifies the amplitude of each part in a binary form…. The digital method thus replaces the continuous sound signal with a discontinuous, stepped signal…. At high frequencies … this division translates into a clear loss of information that the diverse ‘algorithms of the reader’ built into the laser can only imperfectly correct…. Even the least qualified audio enthusiast, listening ‘with eyes closed’ to two identical and synchronized recordings, one digital and the other analog, will therefore note the difference: from the one recording, a brilliant, harsh, somewhat disembodied sound, punctuated by gaps of silence; from the other recording, a full-bodied sound, colored, and the inevitable hiss due to the contact of the stylus with the disc. In short, the compact disc makes less music than the vinyl disc, or more precisely, it makes a different kind of music, more aseptic and ‘cleaner’.” (Nicolas Witkowski, “Disque compact: le son sans la musique”, in L’Etat des sciences et des techniques, La Découverte, 1991.)
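The sampling and quantization the quotation describes can be modeled in a few lines; the sketch below (in Python, using the standard CD parameters) merely illustrates the principle: near the upper limit of the audible range, a tone is captured by barely more than two samples per cycle, and everything between those points must be reconstructed on playback.

```python
import math

SAMPLE_RATE = 44_100   # the CD standard: 44,100 samples per second
LEVELS = 2 ** 16       # 16-bit quantization: 65,536 amplitude steps

def quantize(x: float) -> float:
    """Round an amplitude in [-1, 1] to the nearest of LEVELS steps."""
    step = 2.0 / (LEVELS - 1)
    return round(x / step) * step

def digitize(freq_hz: float, seconds: float = 0.01) -> list[float]:
    """Sample a pure tone at SAMPLE_RATE, quantizing each sample."""
    n = int(SAMPLE_RATE * seconds)
    return [quantize(math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE))
            for t in range(n)]

# A 21 kHz tone sits just under the 22.05 kHz Nyquist limit, so each
# cycle is described by barely two samples; the curve between those
# points is discarded and must be reconstructed, imperfectly, by the
# player's interpolation circuitry.
samples = digitize(21_000)
print(len(samples), 'samples describe 10 ms of a 21 kHz tone')
```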

A supplementary proof of the inferiority of the compact disc with regard to audio reproduction is provided by the fact that vinyl records have made a strong comeback over the last few years, first as luxury products manufactured in limited editions (much more expensive than compact discs), and later released in regular pressings. It is above all in genres like rock, hip-hop and techno, and especially among professionals—musicians, DJs—that the cult of vinyl is most often encountered; some music magazines that specialize in these genres advertise only vinyl discs in their music sections. The limitations of the audio quality of compact discs played at very high volume—according to the custom of DJs—have an immediate impact on your ears, so to speak, and make the superiority of vinyl discs unmistakably apparent; not to mention the fact that most of the manipulations of the sound textures at which the “mixers” are so skilled can only be effected by actually handling vinyl discs.

It is paradoxical that the musical genres that are most closely linked to technology—because they are entirely dependent on electricity and electronics—should be the same ones in which the resistance to the compact disc is strongest. This shows that a certain acoustic discernment is still possible, even in music that is generally based on the automatic repetition of melodic and rhythmic phrases, which rapidly produce in the listener a sensation of hypnosis or, at the other extreme, nervousness. But the importance granted to the physical manipulation of the disc also reveals an aspiration for an individual re-appropriation—one could even call it an almost artisanal impulse—of musical practice, in an audio universe in which individuals only use electronic apparatuses (beat boxes, sequencers, “samplers”, synthesizers, various programs…). In this new use, the vinyl record becomes the object of a diversion from its original function by musical “amateurs”: the recorded audio signal, once it has been manipulated (sped up, slowed down, played backwards…), becomes a source of “raw” materials destined to be re-used as elements for a collage.

(The collage, which preceded, in music as well as in literature or the visual arts, the eruption of neotechnology, constitutes precisely the privileged form of expression of the latter. What was up until only a few decades ago a marginal, provocative and elitist approach to audio, textual or graphic materials—the assembling of pre-existing materials in such a way as to disdain the linearity of the work and the idea of artistic originality—has today become the rule, a procedure of the most tedious banality.)

Returning to the compact disc: the progress represented by the introduction of this new device is increasingly debatable. And this is not just due to the comparison with the vinyl disc; other, more advantageous digital technologies, some of which were invented before the CD, such as the DAT (digital audio tape), have been discarded in its favor. If the compact disc has been a public success despite its inferiority to the DAT, it is because it can easily be copied, unlike the DAT, which contains “a serial audio reproduction control system that prevents the owners of the DAT from making more than one digital copy”.20 Subsequently, the only digital recording system that would enjoy public success was the Recordable Compact Disc (CD-R), “marketed without incorporating an anti-copying system”. The market success of this product is not due, in any case, to its technical superiority, since “the technology of the CD-R is … inferior to that of the DAT in various respects, particularly in regard to the quantity of data that can be stored and the number of successive recordings that can be made on the same disc”.
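The control system in question (known in the industry as the Serial Copy Management System) amounts to a generation flag carried along with the digital signal; the following schematic model in Python captures the logic described above, though not, of course, the actual tape or bitstream format:

```python
from dataclasses import dataclass

@dataclass
class Recording:
    title: str
    is_copy: bool = False  # generation flag carried with the signal

def digital_copy(source: Recording) -> Recording:
    """One digital copy of an original is permitted; copies of copies
    are refused by the recorder (a schematic model of the logic)."""
    if source.is_copy:
        raise PermissionError('recorder refuses: source is already a copy')
    return Recording(source.title, is_copy=True)

original = Recording('some album')
first_copy = digital_copy(original)   # allowed: first generation
try:
    digital_copy(first_copy)          # second generation: refused
except PermissionError as exc:
    print(exc)
```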

Once again, therefore, a neotechnological product is imposed in place of another that was nonetheless better, technically speaking—just as the Concorde, no matter how supersonic it might be, was never going to be successful against subsonic passenger jets, no matter how slow they might be. The reason for this is that technical efficacy is not the only factor taken into account in the adoption or rejection of a product or a technology; other considerations, of an economic, social or cultural order, always come into play. In the case of the DAT, the two most important elements of the neotechnological ideology—operational effectiveness and the obsession with control—entered into conflict, and the “final consumers”, who did not want to buy a device that included an anti-pirate system, preferred the compact disc, technically inferior but less demanding. Thus, it is not always the “best” technics that has the last word.

The market debut, some fifty years ago, of electric musical instruments (guitars, basses, keyboards) and then, more recently, of electronic instruments, has “democratized”—to use the fashionable term—musical practice, so that you no longer need a big studio to be a musician. For it is possible to produce songs rather quickly by obtaining an electric guitar or a synthesizer, whereas it is impossible to really play a clarinet or a violin unless you have previously undergone the requisite training. Certain contemporary musical genres (punk, house, techno, ambient…) have thus been created by people who define themselves, like the famous recording artist Brian Eno, as “non-musicians”. “There is no need to know how to play instruments”, the magazine Technikart informs us (September 1998): “A keyboard, a little audacity, and hop.” Thus we behold the appearance of “the solitary artist, who experiments alone in his home studio” (hence the term house music—“music made at home”), who, like a craftsman in his workshop, “makes ingenious discoveries” and “manufactures home-made discs”. There is, of course, a lot of dross, and “the era of the hyper-flow, of the mix, remixes and sampling” abounds with “trial and error, opportunism, branchouilleries,21 mediocrity and false leads”. But after all, the same thing is true of the more academic musical genres (“classical” and “contemporary” music, jazz, FM rock …) and of the visual arts, where people take themselves so seriously.

Similarly, word processing software “democratizes” writing: one no longer needs to be conversant with spelling and grammar; the spell-checking and grammar software can handle that. Do you want to translate a quotation, but you do not understand English, Italian or German? No problem, there is online translation software that will do it for you.22 You can print a text without knowing anything about typography, thanks to the pre-formatted “style sheets”, and your book will look like it was professionally printed. But there is still the problem of content. This was, up until a few years ago, a major preoccupation for authors; but today, if you lack inspiration, you will undoubtedly discover on the Internet everything you need: then, by assembling these textual fragments, you will surely be able, with a little cleverness, to fabricate “home-made books”. Piloting your home studio with your computer, you will be able to become a writer and a musician, even a graphic artist, without leaving your house; with just one machine, you will be able to disseminate your creativity to the five continents by uploading it all to the Internet.

The tocsin has sounded for all the intermediaries, now considered useless obstacles standing between the “author” and the “public”. Recently, the most popular author in the world, Stephen King, announced that the first chapter of his next book would be uploaded directly onto the Internet; people will be able to read it for free and, if they want to read the rest of the book, they can pay the author; if he thinks he has obtained enough money, he will write the subsequent chapters. King wants to show that publishers and booksellers serve no purpose, and that everything would be much simpler if authors and the public were to communicate directly with one another through the intermediary of the Net. This merely overlooks the fact that if he can allow himself the luxury of communicating directly with his readers, without the mediation of a publisher, and with a good chance of being heard, it is precisely because he already enjoys considerable fame. And, if you want to try to imitate him, you will have to abide by the slogan of Maurice G. Dantec, formulated in the “home-made” style that is fitting for this kind of proposition:

“The writer of the 21st century, if he wants to survive, and attain a certain level of readership amidst the continuous noise of the new media, will have to learn how to transform himself into an electronic icon, a pop icon, he is no longer anything but the changing commercial trademark of a totality of perfectly defined social representations, pre-calibrated for the tele-totalitarian marketing field.” (Le Théâtre des opérations: journal métaphysique et polémique, Gallimard, 2000.)

Two different kinds of authors are trying to become independent of the tutelage of publishers, who in their view have become mere obstacles: those who sell many books, like Stephen King, and those who cannot find a publisher, such as, for example, researchers in the social sciences, for whom the American historian Robert Darnton sees practically no other solution than “desktop publishing”.23 Logically—since he takes it for granted that publishers do not want to publish research works in the human sciences—Darnton has to encourage self-publishing via the Internet. But he is very much aware of the fact that:

“In order to become a book, a thesis must be revised, sometimes made easier to read, sometimes improved, adapted to the needs of the layman and re-written from A to Z, preferably with the participation of an experienced editor. Publishers often refer to this as ‘added value’. And this is only one part of the value that enters into the production of a book. Proofreading, pagination, composition, printing, marketing and advertising: all kinds of technical skills are necessary to transform a thesis into a monograph.”

What he says about the thesis is also applicable, more generally, to all the “typewritten manuscripts” contributed by authors. By bypassing the publisher, one bypasses all those “technical skills” without which a book is not really a book, but just a mass of signs piled up one after another on the pages. If editorial work is indispensable and if the researchers cannot find a publisher, the problem could very well be insoluble. Instead of confronting this problem, however, for the purpose of eventually proposing realistic solutions, Darnton does the same thing as all the other apologists for neotechnology: he gives free rein to all the illusions of hypertextual encyclopedism:

“Far from simplifying this process, desktop publishing will add new complications, but could very well yield a result of a considerably enhanced value. A computer-based thesis could contain almost unlimited appendices and databases. It could be linked to other publications in such a way as to allow the readers to pursue new leads by way of older materials. And, once the technical problems have been ironed out,24 one will be able to ensure economical production and distribution by reducing the publishing costs and making more room on the bookshelves of the libraries. The problems of desktop publishing of this kind are naturally considerable. The start-up costs are high, which is why the publishers begin by arranging for Internet availability and hyperlinks, as well as training or recruiting technical personnel.”

Thus, not only will the problem that he is trying to solve remain the same, but Darnton even proposes that the publishers should spend a lot of money on veritable “research and development” laboratories in order to publish books that, precisely, they do not want to publish in the first place. Here we have once again an example of the mental confusion referred to above [in the preceding chapter] displayed by “researchers” whenever it is a question of analyzing a concrete question (in this case, the one that most closely involves their own careers).

Meanwhile, far from realizing the Borgesian daydreams of people like Darnton, the “e-book” is fighting against all odds to appear to be a … book, for the moment without great success. Electronic “paper” and electronic “ink” are being put to the test; it cannot be doubted that this electronic book will be to the book what “artificial intelligence” is to intelligence: a substitute that fools no one. In any event, once again, as in the case of the Internet or the cell phone, people are prophesying about the extraordinary augmentation of freedom that will be conferred by this “e-book”, which will in fact amount to nothing more than yet another reduction of autonomy. To read a text, you will have to be connected to the Internet first, and pay with your credit card (maybe it will be free at first, but not for long); then, in order to possess a copy of the book—which is what many people do with books, not only out of fetishism, but also in order to read them or consult them again—you will have to print it (there are now photocopiers that print “books on demand”, that is, in their entirety), which will obviously entail even more expense. What would be the purpose of a system that will not be less expensive than real books and will be infinitely less practical to use (have you tried to “leaf through” a digital book, with the delay between pages, even on a “high-speed” network connection?), not to mention the risk—and not a small one—of seeing the text that you consulted yesterday deleted or unexpectedly modified, and of intruders (advertisers or other third parties) collecting data on all the texts downloaded by a reader in his e-book file?25 But if you ask yourself why it is so urgent to introduce the “e-book”, is it not simply because the opportunity to read a book at home, on the street, in a park or wherever, without being connected to the Net and without participating in the “collective intelligence” of the Network, is archaic behavior, an instance of the “resistance to technical change” that must be fought as soon as possible?

But let us leave aside the speculations about the future of the book, electronic or not, and let us take a look at the current situation, observing more concretely how the manufacture of books actually takes place. Neotechnology has now led to an important reshuffling of the roles of the author, the publisher and the printer, and to a generalized de-skilling.

Since word processing software has become so widely available, a publisher will no longer accept a handwritten or typed text (unless it comes from some prestigious old geezer): texts must be “formatted” by the author himself. The typographical workers who in times past ensured the composition of texts, that is, their transition from manuscript to printed form, have disappeared, just as the practice of writing by hand is tending to disappear in favor of composition directly on the computer screen. The publisher thus economizes on the costs of composing the text. The author submits his text on a disc,26 and these days submission directly via the Internet is beginning to become widespread. An author who possesses neither a word processor nor access to the Internet is now considered a dinosaur.

Responsibility for proofreading the text now falls to the author himself, excluding any other person. Specialist proofreaders are tending to disappear and are being replaced by spell-checking software (especially in the press). But these programs, even when they are well designed, are absolutely inadequate for obtaining a text that is purged, as far as possible, of all errors in spelling, syntax or typography. At least two rounds of proofreading by a professional proofreader are necessary in order to obtain an acceptable text—most authors are unacquainted with typographical norms and, quite frequently, with French spelling and syntax. Because, however, the costs of proofreading are increasingly considered an unnecessary expense, the two rounds of proofing that used to be traditional are often reduced to just one. So we should not be surprised to see the innumerable errors, mistakes and gaffes that mar the great majority of books published these days.
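Why these programs fall short is easy to see: a spell-checker of the usual kind only verifies that each word exists in its lexicon, so any error that yields another real word passes unnoticed. A toy illustration (the lexicon is invented; real checkers are subtler, but share the limitation):

```python
# A toy dictionary-based spell-checker: it flags only words absent
# from its lexicon, so errors that produce other real words pass.
LEXICON = {'the', 'reader', 'read', 'red', 'two', 'too', 'to', 'books'}

def spell_check(sentence: str) -> list[str]:
    """Return the words not found in the lexicon."""
    return [w for w in sentence.lower().split() if w not in LEXICON]

# Every word below is "correct" in isolation, so nothing is flagged,
# although the sentence should read: "the reader read two books".
print(spell_check('the reader red to books'))   # prints []
```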

The pagination of the text was in the past shared between the publisher, who defined the layout of the book, and the printer, who paid the specialist typographer whose job consisted in making sure that every book conformed to the layout specified by the publisher. Today, this, too, is “internalized” by the publisher, who delegates the execution of this task to relatively unskilled personnel, or even to student interns, often under the authority of “artistic directors”, and this entire little world is equipped with expensive computer-assisted publishing machinery. Since the text is stored on a disc, why have the book paginated externally when it can be done directly from the disc? The problem is that typography is an art that presupposes—like all arts—the mastery of a technique based on rules that were once well known to traditional printers, but which are today almost entirely unfamiliar to those who work in the publishing industry. A large number of “artistic directors” do not even know the rudiments of typography; they think that graphic whimsy is a virtue, whereas proper pagination must necessarily take into account an entire set of conventions, born from experience, concerning the optical limitations of reading. But how can people who have never opened a book even suspect that such limitations exist, or that typographical conventions are anything but incomprehensible foolishness?

“Reading comfort is the main principle of all typography. However, only someone who is truly familiar with reading can judge readability…. The real cause of so many defects in books and other printed matter is the lack of tradition, or the outright abandonment of tradition, and the presumptuous scorn for conventions. If we can read a text easily, it is because our customs are respected. To know how to read presupposes conventions and knowledge of them. Anyone who throws these conventions overboard runs the risk of transforming the text into something unreadable.” (Jan Tschichold, Livre et typographie: essais choisis, Allia, 1994.)

In order to save money, some publishers do not just require authors to submit a disc, but also what they call a camera-ready copy; in other words, they are asking the authors to do the pagination themselves. All the publisher has to do is send the pages already printed out by the author (on a laser printer) to the “printer”, who merely has to reproduce the desired number of copies. The authors, who we may assume are not acquainted with the subtleties of the typographical art, must format their texts on their word processors by following the vague instructions provided by the publisher; and since the latter does not really have a very clear idea of what good typography looks like either, the final result is the exact reflection of the means employed.

Therefore, the publishing industry these days is ever less distinguishable from the photocopy stores that are so plentiful around schools, while an increasing share of the manufacture of the book falls upon the author. The latter, even if his book is published by a professional publisher, does a job that is increasingly similar to self-publishing; in reality, it is above all the cost represented by marketing, advertising and distributing the book that makes publishers continue to be indispensable. And it is certainly true that, because of the de-professionalization and de-skilling that are in fact taking place, the “chain of the book” is becoming increasingly fragile, implying an ongoing decline in the quality of the books that are produced and, as a result, a diminution of the demands of the reader, who is so accustomed to reading poorly edited texts that he will finally end up asking himself whether it is even worth the trouble to buy books, and whether he might as well just connect to the Internet to print out the pages that interest him.

We could very well say, without too much exaggeration, that most of the books produced today are the products of authors who do not know how to write, translators who do not know how to translate, editors who do not know how to edit, and printers who do not know how to print. The existence of word processing software and computer-assisted printing makes it possible for incompetents (authors, graphic designers) to assume responsibility for tasks that were performed in other times by highly skilled professionals. The ideology inherent in neotechnology makes it possible for this kind of regression—for which we could provide examples in practically every profession—to pass for progress. Faced with this demagogic mystification, which consists in making everyone believe that they can be transformed overnight into a Pico della Mirandola thanks to computer technology, the simple and banal reality, that no one can become a proofreader, a translator or a typographer without a long apprenticeship, can make no headway at all.27 Who needs proofreaders, translators or typographers? They are people who add value to the labor of others, which is obviously not at all gratifying for our era of exacerbated narcissism. Considered as mere auxiliaries without any prestige or interest, they can be advantageously replaced by proofreading machines, translation machines, and pagination machines. The result, with regard to translation, is exemplified by the following (an exact reproduction of an email sent to Le Monde, which published it on June 4, 1999):

“I am a writer congratulating the persons of Paris for a marvelous experience. Pardon the excuse my Frenchmen. I am to use a computer program to automatically translate my English. I am even my womb English of the text, in case where the translation program completely strikes it. I have never been in France and was told by every other American that the French were too rude in America. Monday, my son and I rode our bicycles from Paris to Versailles and spent half a day that ended by validating Mel Brooks’ affirmation: “It’s good to be of the king!”…. I have received trapped in the traffic and it was swept in the left rapid alleys of the boulevard of president Kennedy. It is surprising you can pedal as rapid when adrenaline boot in, but a swarm of moving rapid the cars were at the point of beating me. I can make everything was pedaling, left me in my departure and waited to win some blow that would want audience my outside this life. But instead of a hit I expected that the ‘dweedle-dweedle’ of a car horn. It was not an obnoxious blare, rather the courteous beep accelerated I knew a bicyclists around the world as a gentle greeting. France is a much more courteous place on bicycle than the my state of Utah. I was applauded many more times than day as I rided rapidly around Paris. My last day pedaling in France, Thursday, my son and I have taken the train to Rambouillette, where we rode 100 km in your beautiful country roads. The roads were so peaceful like velvet. The mustard was spread and we ate a 15-franc sandwich superior to the one that I had spent ten times than the much of large of the street of the Louvre Museum. I am dealt my American friends that if the French are rude with them, he probably deserved it. Excuse us for our arrogance these days, I desire had a way for my to lead to America a good dose of kindnesses and of love for life. And thanks again.”

Now that machines have been so effectively put to work to replace translators, the latter, rendered superfluous, can spend their free time devoting themselves to activities that will expand the horizons of their egos, like the one that is now so popular on the Web, cited in the specialist media as an example of a “cutting edge” and positive attitude, which consists in videotaping oneself twenty-four hours a day with the help of a webcam and broadcasting it in real time over the Internet.

Such conditions are evidently not conducive to the attentive reading of books, except for the kinds of books that one reads on the commuter trains with a Walkman stuck in one’s ear, in between two tasks that are more worthy of interest. To really read, you need the feeling of having some time ahead of you, and above all the conviction that this activity contributes something to your life. Decades of the “politics of reading” have valorized reading as leisure,28 as if reading books were an end in itself, so that now one no longer reads in order to acquire a better understanding of the world or to “orient oneself in thought”. Other, much more gratifying forms of leisure are within reach, which do not offer, like the book, the disturbing sensation of confronting oneself, obliged to think, if possible in a quiet place, far from the gaze of others and therefore almost dead.

(Chapter 3 of Jean-Marc Mandosio’s book, Après l’effondrement. Notes sur l’utopie néotechnologique, Encyclopédie des Nuisances, 2000. Translated in February-March 2014 from the Spanish translation that was first published in the first issue of the journal, Los Amigos de Ludd, December 2001, under the title, “El condicionamiento neotecnológico”.)

The Spanish translation of this text may be viewed online (as of February 2014) at: http://www.ecologistasalcalah.org/docs/curso/el_condicionamiento_neotecnologico.htm

  • 1 Chapter 3 of Après l’effondrement. Notes sur l’utopie néotechnologique, Encyclopédie des Nuisances, 2000.
  • 2 In the first part of the book, Mandosio defines what he means by neotechnology in the following manner: “1) An economic and technical system, that of the ‘new communications technologies’, with its production process, its infrastructures (the ‘information highway’), its equipment (micro-processors, programs…) and its markets (the targeted public, that is, everyone); 2) the inseparable ideology of this system, which preceded it, engendered it, and feeds on its further development. As an ideology, neotechnology makes its technologies conceivable, and then assimilable: it paves the way for its reception through the production of philosophical, economic and journalistic discourses; as an economic and technical system, it in turn confirms the relevance of these discourses and forces them to adapt in order to ‘harmonize’ with its development, which is never totally anticipated. Neotechnology, under these two aspects, constitutes a process of self-validation that functions in a closed circuit, which makes it similar to a totalitarian ideology or a religion.”
  • 3 This best-seller, first published 20 years ago (1980), made no small contribution to disseminating this worldview, even though Toffler was not the first to formulate it.
  • 4 Forecasting and Assessment in the Field of Science and Technology.
  • 5 The two others were “Work and Employment—a major problem of the 1980s” and “Biosociety—biotechnologies as a major driver of change in the next thirty years”.
  • 6 The idea, furthermore, is not new. One may read in a document of the Rockefeller Foundation, written in 1944 (cited by Horkheimer and Adorno in Dialectic of Enlightenment): “the supreme question that our generation must face today—the question in relation to which all the other questions are nothing but corollaries—is that of the control of technology…. No one knows exactly how to achieve this result.”
  • 7 See, for example, Bertrand Leclair (L’Industrie de la consolation: la littérature face au “cerveau global”, Verticales, 1998), who warns the reader that “this essay is not aimed at the Internet or CD-Roms, which are effective tools, with wide-ranging and passionate applications according to some people, but at the propaganda which in fact precedes them, and the propaganda that is undertaken to cause them to be accepted. In brief, what is to be exclusively addressed in the pages that follow is the ideology through which the stunning development of these technical innovations is filtered and by means of which they are in turn amplified (and which, in this sense, they can reveal).” Not to see that technology—old and new—is itself an ideology, is to completely miss the point of the issue.
  • 8 Très Grande Bibliothèque nationale de France. [Note of the Spanish translator.]
  • 9 This judgment is based on the idea that “the generalization of access to the Internet will lead to a situation where the basic data that are not accessible on the websites open to the public lose part of their scientific value”. An interesting discovery, considered in relation with the bibliometric definition of what constitutes the nature of science elaborated in the preceding chapter: what is accessible to everyone has more scientific value than what is not. One sees the intimate relation that unites a senator’s view of science and the epistemological relativism that is all the rage these days.
  • 10 No one, however, wrote love letters to the robotic woman of Metropolis or the Bride of Frankenstein: the disturbing seduction exercised by these creatures was still mixed with repulsion.
  • 11 The Internet is a totally decentralized network whose ancestor, Arpanet, was conceived for Pentagon information storage and communication in such a way that it could not be entirely disabled, even in case of nuclear attack.
  • 12 “Half of all Internet surfers access the network through a business or school connection, living in the primitive utopia of a free Internet.” (Alain Le Diberder, Histoire d’@: l’abécédaire du cyber, La Découverte, 2000.)
  • 13 Along the same lines, American preschoolers are forced to watch televised “educational programs” that are offered for free to school districts, but which include advertising that the schools are not permitted to skip.
  • 14 Telephone providers are beginning to demand that some customers pay a “1,500 franc activation fee, whereas in the past activating a phone connection was simple”; in other words, “it is no longer the customer who chooses the provider, but the provider who chooses the customer” (Libération, August 25, 2000).
  • 15 Solange Ghernaouti-Hélie and Arnaud Dufour, De l'ordinateur à la société d'information, P.U.F., 1999.
  • 16 France Telecom has just announced (late August, 2000) that it intends to market, beginning in September 2001, a wristwatch for children in which an integrated Internet connection is installed as well as a locator system similar to that in the cell phone, whose main purpose will be to allow the parents to remotely monitor the movements of their children. This corporation that, as it claims, “will make us love the year 2000”, is now planning, as openly as possible, to equip children with a spy bracelet, whose only difference from those worn by certain offenders sentenced to “house arrest” will be that it will be fun and interactive.
  • 17 Even though many compact discs (singles) only contain two or three songs, with a bonus remix.
  • 18 An American firm has even recently released a disposable digital video disc (DVD), “programmed to self-destruct after a certain amount of time” (Transfert, March 2000). “The disc is coated with an ultra-fine chemical layer … that begins to degrade after the first exposure to the laser-reader. After a few minutes or several days, depending on the thickness of the chemical layer, the DVD is no longer readable.”
  • 19 See footnote 8 above.
  • 20 It could very well be pointed out in response that “low quality” compact disc players emit a perceptible hum when played at low volume.
  • 21 An argot term derived from electronics (to plug in, to be wired), which in this context means to be fashionable.
  • 22 These software programs can in all seriousness be characterized as programs for the automatic production of surrealist texts. The translation software of Alta-Vista—one of the most popular Internet search engines—translates the English word “hair-dryer” with the disturbing formula, “dessicator of hair”.
  • 23 Robert Darnton, “Le nouvel âge du livre”, Le Débat, May 1999.
  • 24 In all the “prospective” discourses of this kind, technical problems are dismissed with the stroke of the pen, in conformance with the neotechnological postulate that everything that is imaginable is immediately realizable.
  • 25 The computerized catalog of the T.G.B.N.F., for example, contains in its memory the records of all the requests for books made by each reader, the dates of his visits to the website, etc. We should recall that, in the same way, all the sites visited by web surfers and all the phone calls made or received on the cell phone or home phone are recorded; in order not to be identified you have to go to a cybercafé or an old-fashioned phone booth.
  • 26 This almost inevitably gives rise to tragicomic episodes where the files are lost, the chapters that have already been corrected and paginated are unfortunately “replaced” by older versions that have not been corrected or paginated, etc.
  • 27 “For those who have acquired experience in any art correctly judge the productions of that art, understanding by what means and how the perfection of the work is achieved, and they know which elements of the work harmonize with each other.” (Aristotle, The Nicomachean Ethics.)
  • 28 With pathetic arguments to attract “the youth” to books, such as this one: “A library where you can sniff around the books before choosing one, that is absolute zapping.” (François Nourissier, quoted by Jean Tibéri, the mayor of Paris.)


Chapter 4 (Excerpts): The end of the human race?

Submitted by Alias Recluse on March 2, 2014

Chapter 4 (Excerpts)

The End of the Human Race?1 —Jean-Marc Mandosio

Amidst the general destruction of all the conditions that might (eventually) allow the individuals who comprise humanity to finally have access to a life worth living, neotechnology2 is the vector and accelerator of a four-fold collapse: 1) of time, or duration, to the benefit of a perpetual present; 2) of space, to the benefit of an illusion of ubiquity; 3) of reason, which is confused with calculation; 4) of the very idea of humanity itself.

None of these collapses is exclusively imputable to neotechnology, which only implements the promises of the technological era. We shall take a closer look at how it does so in the following pages.

“Live for the moment”: the message that the Coca-Cola corporation installed, in luminous letters, on all the soda machines in the Paris Metro stations, is truly the imperative of our time. It is also a literal translation (undoubtedly unintentional) of the “carpe diem” of Horace, the classical reference par excellence, evoking a time when students, “nourished on Greek and Latin, starved”; but what was originally advice offered by an Epicurean to rich Roman businessmen and men of letters has been transformed into a veiled sadistic threat: how could the pallid living dead who drag themselves through the corridors of the Metro in the middle of August “live” in that or any other way? The only thing that is expected of them is an urge to spend money. This slogan perfectly summarizes the spirit of an era in which the worn-out slaves of hypermodernity go from fear—for example, when driving the wrong way on a highway exit ramp—to the search for the ecstatic crash in which they will finally feel like they really exist. The proliferation of paroxystic states, of “risky” behaviors, from gangbanging to bungee jumping, from shooting heroin or smoking crack to staying awake for days at a time thanks to amphetamines, is the application of the famous subjectivist slogan: “Live without dead time, enjoy without restraint.”

“Live for the moment” is also to immerse oneself in the flow of instantaneous communication, in “real time”, by the mediation of interconnected computers. Anything that is not a part of this permanent happening, where “chat rooms” follow “personal reality shows” with continuous feeds, is null and without value. Now that they are all “interactive”, the spectators are invited to take pleasure in their own alienation. (Hence the slogan of a recent anti-television campaign: “Become the actors of your own life”). The ideology of New Age—which owes its success, just like Christianity and other oriental religions, to its valorization of acquiescence as “self-realization”—says nothing else:

“Millennia are nothing but the products of human imagination; the world only exists in the present—today’s present moment, the image of eternity—as the common universe that we must effectively inhabit, that is, share and love in order to make it our own.”3

This allegedly real time is not time but its absence, its reduction to quasi-immediacy. What is thus falsely called time is the opposite of duration, of that time that Kant called “the form of inner sense, that is, of the intuition of ourselves and of our inner state”. It is instead the result of that struggle against duration, against human time, that constitutes the characteristic trait of industrial societies, where everything that takes time, however little, is by definition a waste of time. Since time is nothing but money, as everyone knows, profitability imposes the law of accelerated turnover: in dining (fast food), in travelling (high-speed journeys), in communication (transmission of large amounts of data in a short time), etc. On the other side of the ledger, the prolonged “leisure time”—that is, the intervals devoted to spending the money that we have been able to earn as fast as possible—will be devoted to immersion, for as long as possible, in “real time” communication, which means never leaving the circle of technological conditioning (and therefore the conditioning of the world of the commodity, since neotechnology is, as we pointed out at the beginning of this chapter, a system that is both technical and economic).

The collapse of time is obviously accompanied by that of memory. From the perspective of real time, a year is a century. You have to avail yourself of the services of a professional historian in order to discover what the world was like six months ago, and the world of twenty years ago is lost in the mists of a semi-legendary past:

“A Petit Larousse from 1979 is thus the only testimony of a past era, a technical Middle Ages that is disturbingly close to us, when there were telephone booths, typewriters and televisions without remotes whose programming ended at eleven p.m.” (Alain Le Diberder, Histoire d'@: l'abécédaire du cyber, La Découverte, 2000.)

There is, however, still a domain where brevity is viewed as an inconvenience rather than a blessing: the human lifespan. Death is no longer the natural conclusion of life, but a scandal, an attack on what is supposed to be some sort of human “right” to live as long as possible. Any random imbecile—in this case, a Danny Hillis, specialist in “artificial intelligence” and one of the founders of the Thinking Machines Corporation—can enthusiastically declare: “I value my body, like everyone else, but if a silicon body would allow me to live to be two hundred, I’m all for it.”4

It is true that humanity has always cherished the dream of the elixir of eternal youth. Now that the average lifespan of certain categories of the world’s population has significantly increased,5 can we say that these people who survive so much longer than their predecessors really live, if we do not content ourselves with thinking, along with the biologists, that it is enough for metabolic functions to continue to function in order to affirm that an organism “lives”? There was a time when one could say, with Aristotle, that one can only judge the life of an individual after his death, “… in a complete life … one swallow does not make a summer, nor does one day; and so too one day, or a short time, does not make a man blessed and happy”; but can we judge a life that has been entirely devoted to “living for the moment” in any other way than to declare that it is worthless? What kind of life experience will all these nonagenarians or centenarians who will be exhibited on their birthdays bequeath to their descendants (if they have any) or to posterity?

A laboratory experiment concerning life extension has recently been conducted using transgenic mice. Its results were published in the November 1999 issue of the journal, Nature:

“… for the first time in a mammal, a gene known as p66 has been shown to be directly implicated in the aging process. A recent theory that attempts to account for the aging process attributes a role to oxidative stress, that is, cellular damage caused by free radicals, toxic molecules derived from oxygen. Enrica Migliaccio and her team at first wanted to study the role of p66 in the response to oxidative stress: the researchers then noted that the p66 protein had undergone a change. In order to find out more, they bred transgenic mice, known as 'knock-out' mice, in which the p66 gene was rendered inactive. Then they studied the action of agents capable of causing damage to DNA, via oxidative stress (ultraviolet radiation and hydrogen peroxide), on the cells of these mice. The result was surprising: while the cells of the normal mice died in the presence of the hydrogen peroxide, the mouse cells that did not express the p66 gene survived. This protective effect was also demonstrated in vivo. Because resistance to external stressors is generally related to an increase in lifespan, the researchers wanted to know what effect the mutation had on the longevity of their mice. The result was spectacular: the mutant mice lived an average of 30% longer than the normal mice…. The mutation of p66 does not appear to have serious biological consequences…. The researchers suggest that p66 exercises, under normal conditions, an inhibitory effect on DNA repair mechanisms. The mutation of the p66 gene allows the cells to permanently repair their DNA, and the mice to live longer.” (Enrica Migliaccio et al., “The p66shc adaptor protein controls oxidative stress response and life span in mammals”, Nature 402, pp. 309-313, 18 November 1999.)

The newspapers retained nothing from all of this except the report of the “exceptional longevity” (Le Figaro) of these mice that “live longer” (Le Monde), and live “a long, disease-free life” (Libération). But two other aspects of this research strike us as much more important:

1. The research involved not just longevity, but also “resistance to stress”—in other words, habituation to harmful phenomena. Let’s translate what we have just related about the mice to the human species. Most human beings adapt quite readily, even to the worst environmental conditions (you only need to read Primo Levi’s If This Is a Man in order to be convinced of this). Normally, our resistance is relatively strong, because we have become accustomed—the process is known as “Mithridatization”6 —to concentrations of environmental pollution that would probably kill a man from the 15th century in a few days were he to be suddenly exposed to them; just as we would rapidly become ill were we to be suddenly subjected to the living conditions of the 15th century. But harmful phenomena are proliferating at such an unprecedented rate that the process of Mithridatization (which, like all habituation, must take place gradually and requires a certain amount of time) is no longer effective, and the natural environment is rapidly becoming a deadly environment. Mrs. Migliaccio has found the solution: instead of attempting to change an environment that creates such “stressors” in such a way as to render it less harmful to the individual, all we need to do is change the individual, by modifying his genes, to adapt him to an environment that, for that very reason, will no longer be so stressful, and therefore can no longer be defined as harmful. Transgenic man will thus be able to live 30% longer even if he is subjected to a continuous bombardment of radioactive particles in an atmosphere saturated with dioxins and other toxic sulfurous, nitrogenous, and organic compounds.
2. The gene in question (“p66”) appears to be totally useless, and since it only has inhibitory effects, its mutation will not have “serious biological consequences”. But not to acknowledge a harmful effect for what it is, thanks to “resistance to stress”—to become habituated, for example, to the infernal racket that prevails in our cities and in all our public spaces; to discover that Pizza Hut is not so bad; not to yield to panic when trapped in a traffic jam, during a hot day, on the highway; to remain calm and cheerful after having witnessed someone commit suicide on the tracks of the Metro—presupposes the loss of the capacity for judgment and therefore of thought. Of course, this is not a “serious biological consequence”, insofar as such things do not affect the smooth operation of the main organs responsible for assuring metabolic functions, but there can be no doubt that they represent an important psychological consequence. Experiments on mice are apparently unlikely to lead to such a conclusion; but human beings, unlike mice, supposedly think. Since the loss of the ability to think for oneself is already clearly widespread among the greater part of our contemporaries, we may conclude that transgenic treatments will not change much in their lives: they will perceive no inconveniences in them, only advantages.7

We do not know if Mrs. Migliaccio read, when she was younger, the report published in 1958 by a study group of the World Health Organization on “Mental health aspects of the peaceful uses of atomic energy”. This report pointed out that “from the point of view of mental health, the most satisfactory solution for the future peaceful use of nuclear energy would be to see a generation arise that has learned to get used to a certain dose of ignorance and uncertainty”.

As each passing day proves, this new generation is already here, and Mrs. Migliaccio’s mice will contribute to the perfection of the ignorance and the uncertainty of the generations that will come after them. More generally, research projects in genetic engineering, which are conducted with mice, fruit flies and potatoes, all tend, beyond the immediate interests of industry and its profits, towards a eugenic goal, which is the constant, though less and less openly avowed, concern of the geneticists: to eliminate imperfections, to improve the human stock in the name of apparently indisputable goals (the eradication of disease, life extension…). We do not, however, want our lives to be prolonged by these methods, just as we would not want, for anything in the world, to live to be two hundred years old in a carcass of silicone, even if this were to be possible.

The collapse of time is intimately linked to that of space. The neutralization of distance by the reduction of the time spent on traveling and by almost instantaneous communication via the Internet engenders a false impression of ubiquity. Obviously, real distance is not abolished, only the representation that we have of it: the subjective experience of distance undergoes, like that of duration, a kind of contraction. To put it another way, being nowhere we can have the sensation of being everywhere at once. In order for this contraction to take place, so that “real time” can be the same for everyone, everywhere on earth, certain a priori material conditions are necessary: the extension of the industrial system to all societies, control of the planet by the establishment of homogeneous transportation and communications networks, the standardization of lifestyles (Chinese restaurants in Paris, pizzerias in Hawaii, McDonald’s in Beijing)—with the fictitious preservation of various biological and cultural reservations. A paradox then arises: places that are relatively close to each other but which are not connected by air, highways or high-speed train become more distant than others that are nonetheless much farther away. The contraction of space is thus accompanied by its destructuring. This paradox, which made its debut in the 19th century with the railroads, is a powerful factor in the desertion of the unconnected zones and in concentration around the main “nodes” of communication. The development of air travel and the high-speed train has only reinforced this trend. The development of the Internet, on the other hand, tends to favor a certain kind of decentralization: some people take up residence far from the cities while remaining “connected”; but that is precisely what prevents them from “living in the country” and transforms the latter into the green periphery of neotechnology. The Internet thus exacerbates among those who use it the feeling that what is most distant is at the same time what is closest.
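To make the paradox concrete, here is a minimal sketch (ours, not the author’s; the place names and travel times are invented for illustration), in which “distance” is the shortest travel time across a transportation network rather than a number of kilometers:

```python
# Illustrative sketch: effective "distance" as shortest travel time over a
# transportation network, computed with Dijkstra's algorithm. The place
# names and hour figures below are hypothetical.
import heapq

# Graph: node -> {neighbor: travel time in hours}
network = {
    "Paris":      {"Lyon": 2.0, "NearbyTown": 3.5},  # TGV to Lyon; slow local roads to the town
    "Lyon":       {"Paris": 2.0},
    "NearbyTown": {"Paris": 3.5},
}

def travel_time(graph, start, goal):
    """Shortest travel time between two nodes (Dijkstra's algorithm)."""
    best = {start: 0.0}
    queue = [(0.0, start)]
    while queue:
        t, node = heapq.heappop(queue)
        if node == goal:
            return t
        if t > best.get(node, float("inf")):
            continue  # stale queue entry
        for neighbor, hours in graph[node].items():
            candidate = t + hours
            if candidate < best.get(neighbor, float("inf")):
                best[neighbor] = candidate
                heapq.heappush(queue, (candidate, neighbor))
    return float("inf")  # unreachable on the network

# Lyon is some 400 km from Paris, the hypothetical town perhaps 40 km away; yet:
print(travel_time(network, "Paris", "Lyon"))        # 2.0 hours
print(travel_time(network, "Paris", "NearbyTown"))  # 3.5 hours
```

The geographically nearer place turns out to be the more “distant” one: the metric of space has been replaced by the topology of the network.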

The destructuring of subjectively perceived space is also translated into the new forms of urban or suburban conditioning, where every place is transformed into a “non-place”:

“Aggressive, hard to understand, disconnected from biological rhythms, the contemporary city sometimes seems like it was designed by highly-evolved cyborgs, endowed with a perception of space and time different from that of its ordinary inhabitants…. Unlike traditional urban space, the contemporary city can no longer be traversed in every direction. Numerous spaces are reserved for specialized forms of circulation. One cannot just walk wherever one wants because of the multiple obstacles posed by infrastructures…. The resulting space is like a Swiss cheese, crisscrossed by roadways…. Because it cannot be spatially apprehended, the unity of the city is a synonym for a public relations campaign…. Everywhere, the same malls, everywhere a superabundance of signs that are powerless to channel the impression of the fragmentation of urban space, a potentially infinite fragmentation that is similar to a fractal process…. The same scenery seems to be reproduced from one corner of the planet to the other, as if the whole world was being prepared for the advent of a new race of cyborgs capable of understanding an urban environment that has become an enigma” (Antoine Picon, La ville territoire des cyborgs [“The City, Territory of the Cyborgs”], L’imprimeur, 1998).

The destructuring of space also entails that of subjectivity, because space is, like time, an a priori form of perception: it is not something that we perceive, but the very framework of our perceptions, the totality of coordinates within which our sensory experience is constituted (as Kant said, “[space] is the subjective condition of sensibility, under which alone outer intuition is possible for us”). In a space that is fragmented to the extreme, stripped of any point of reference and endowed with paradoxical properties, consciousness itself becomes fragmented and schizophrenic. At least in part, we may thus refer to psychogeography to explain the almost simultaneous appearance all over the world of serial killers and, more generally, aberrant and self-destructive behaviors.

The relativity of time and space that the astrophysicists talk about makes no sense—just like the paradoxical properties revealed by particle physics—except on a non-human scale of phenomena. In our everyday experience, Kant’s observation is still entirely applicable: “Space represents no property at all of any things in themselves, nor any relation of them to one another, i.e., no determination of them that attaches to objects themselves and that would remain even if one were to abstract from all subjective conditions of intuition.” In the same way, although we know that the earth spins on its axis and revolves around the sun, it remains the case that for us, as for Husserl, “the Earth does not move”. Finally, it is not true that “we have a potential, virtual body, capable of every kind of metamorphosis”, nor that “it is infinitely variable” (Michel Serres).

The confusion between the virtual and the real,8 the total disorientation that characterizes the schizophrenics of the post-industrial era, entails the impoverishment and sterilization of the imagination. The latter ceases to be creative—except, in principle, among the “creatives”, whose function is precisely that—and is restricted to the consumption and tedious repetition of prefabricated images.

Memory and the imagination, as they collapse, necessarily drag reason down along with them. We have already seen repeated examples of this decomposition of reason in our comments on the texts of researchers or university professors (not to mention journalists) in relation to neotechnology or other matters. The accelerated dissolution of reason in the tepid waters of inconsistent charlatanry goes hand in hand with the conviction, which is becoming ever more widespread, that reason is nothing but a simple faculty of calculation. This conviction, which has been disseminated with the generalization of information technology, is derived from an enormity attributed to the English philosopher Thomas Hobbes, which is repeated by all the specialists of “artificial intelligence”: “Reason [Thinking] is nothing but Reckoning (that is, Adding and Subtracting).” From there it is only a short step to the conclusion that calculating machines—and that is all that computers9 are—are “intelligent”.

It is a grave mistake to confuse reason with the art of counting, for reason is about something else altogether. Here is how the Abbé de la Chapelle defined reason in the Encyclopedia two centuries ago:

“We can conceive different meanings of the term reason.
1. We can simply and without further qualifications understand it to refer to that natural faculty of knowing the truth, regardless of the light in which it is understood and the order of materials to which it is applied.
2. We can understand reason to be that same faculty considered, not absolutely, but only to the extent that it is guided in its quest by certain ideas, with which we are endowed at birth, and which are common to all men….
3. Reason is sometimes understood to mean that same natural illumination, by which the faculty that we designate by that name is led….
4. By reason we can mean the chain of truths to which the human spirit can naturally accede, without the help of illumination or faith.”

There is not the slightest trace of calculation in all of this; it is always about the truth and natural illumination. The word reason was only used in the sense of calculation in mathematics (what we now call an “account book” used to be called a livre de raison). In Latin, ratio certainly means calculation, but this is only one of the word’s meanings, as it can also mean “discourse”, “reasoning”, etc.

Also in the Encyclopedia, Diderot, inspired by Francis Bacon, divided all human knowledge into three categories, i.e., “History, which refers to Memory; Philosophy, which emanates from Reason; and Poetry, which is born from the Imagination”. In an era when these three faculties are not found in most people’s minds except in the most rudimentary form—somewhat like the dilutions of the “memory of water” in homeopathy—it is hard to admit that philosophy emanates from reason, if we understand by “philosophy” the desiring machines of Deleuze, the différance of Derrida or the disciplinary laboratory of Alunni. It is only quite recently that philosophy has become a specialized discipline (whose method and object actually remain quite obscure); in the past, as in the times of Diderot, philosophy embraced all the sciences, divided into “the science of God”, “the science of man” and “the science of nature”, and the mania for mathematical (or pseudo-mathematical) formalization did not yet exercise its tyranny over most disciplines. Only since the advent of mathematical logic—of which informatics is the direct heir—has reason been narrowly identified with calculation: in 1854, one of the founders of this discipline, George Boole (inventor of the famous “Boolean algebra”), entitled his most important work An Investigation of the Laws of Thought. But the “truth” that the Abbé de la Chapelle was talking about has nothing to do with the truth that mathematical logic is concerned with: in the former case it is a matter of real knowledge, knowledge of the nature of things; in the latter, of a simple formal framework, one that establishes the conditions under which a logical proposition can be judged “true” or “false”, regardless of any external referent.
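The distinction is easy to make concrete. In the minimal sketch below (ours, added for illustration), a propositional formula is classified as “true” or “false” purely by enumerating assignments of True and False to its variables, exactly as Boolean algebra prescribes; at no point does the procedure consult anything outside the formula itself:

```python
# Illustrative sketch: formal "truth" in the Boolean sense. A formula is
# evaluated under every assignment of True/False to its variables; no
# external referent is ever consulted.
from itertools import product

def classify(formula, variables):
    """Return 'tautology', 'contradiction' or 'contingent' for a
    propositional formula given as a Python callable."""
    results = [formula(*values)
               for values in product([True, False], repeat=len(variables))]
    if all(results):
        return "tautology"      # formally "true" under every assignment
    if not any(results):
        return "contradiction"  # formally "false" under every assignment
    return "contingent"

print(classify(lambda p: p or not p, ["p"]))       # tautology
print(classify(lambda p: p and not p, ["p"]))      # contradiction
print(classify(lambda p, q: p and q, ["p", "q"]))  # contingent
```

Whether p stands for “the mice lived longer” or for nothing at all makes no difference to the verdict; this is precisely the formal “truth” that has nothing to do with knowledge of the nature of things.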

Reasoning does not consist only of a series of operations of formal logic that a correctly programmed computer carries out unerringly. The classical computers did nothing but mechanically run programs—sometimes incredibly complex programs—based on the properties of mathematical logic, without ever dealing with either “truth” or “natural illumination”. They were no more related to reason than a plow or a toothbrush. As one author with an exquisite taste for euphemism said: “Researchers in artificial intelligence undoubtedly employed a formalism that was too narrow and for that very reason they lacked essential concepts for the understanding of the nature of intelligence” (Dominique Pignon, La mécanisation de l’intelligence en quête de perspectives nouvelles [“The Mechanization of Intelligence in Search of New Perspectives”]).

So, what does it mean to reason? This is not well understood—which is to say that no one has any idea—and perhaps the best definition might still be the one offered by Plato: “a dialogue of the soul with itself” (hence the dialectic, initially the art of dialogue, where thought advanced by successive affirmations and negations). The exercise of reason sets in motion not only the faculty of chaining together propositions logically, but also a faculty which does not pertain exclusively to formal logic, since it embraces the imagination, memory and sensory experience; furthermore, reason is not the attribute of an isolated individual, as the philosophers have always imagined (especially on the model of the Philosophus Autodidactus (the “autodidact philosopher”) introduced by Ibn Tofail in the 12th century), but of a human society. For that reason, even computers that are less rigidly formalized than the classical computers, called “neuronal” computers because their structure supposedly imitates that of biological neurons, and which more or less manage to simulate certain basic perceptive mechanisms (voice or optical recognition), have—as another subtle coiner of euphemisms said—“many problems in addressing the structured representations of language and reasoning” (Daniel Memmi, Connectionism and Artificial Intelligence as Cognitive Models). And soon they will announce the introduction of “biological” computers, which combine transistors and neurons (of the planarian, the rat, or the snail), or else replace silicon microprocessors with strands of DNA…. Perhaps these new computers will “calculate” more rapidly than the current ones, but they still will not reason, because what all these machines lack is dialectics.
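These limits can be glimpsed even in the simplest “neuronal” machine. The sketch below (ours, added for illustration) trains a single-layer perceptron, the ancestor of such computers: it learns the logical OR pattern by blind weight adjustment, but can never learn XOR, which no weighted threshold separates (the classic observation of Minsky and Papert in 1969); pattern adjustment, in other words, without anything resembling reasoning:

```python
# Illustrative sketch: a single-layer perceptron "learns" by nudging its
# weights after each error, nothing more. It masters OR but cannot master
# XOR, which is not linearly separable.

def train_perceptron(samples, epochs=50, lr=0.1):
    """Train weights (w1, w2) and bias b on ((x1, x2), target) pairs."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            error = target - output
            w1 += lr * error * x1   # adjust weights toward the target...
            w2 += lr * error * x2   # ...a purely mechanical correction
            b += lr * error
    return w1, w2, b

def accuracy(weights, samples):
    w1, w2, b = weights
    hits = sum((1 if w1 * x1 + w2 * x2 + b > 0 else 0) == t
               for (x1, x2), t in samples)
    return hits / len(samples)

OR_data  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
XOR_data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(accuracy(train_perceptron(OR_data),  OR_data))   # 1.0, fully learned
print(accuracy(train_perceptron(XOR_data), XOR_data))  # at most 0.75, never all four
```

Adding more layers would let a network fit XOR, but only by the same blind adjustment of weights; nothing in the procedure ever dialogues with itself.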

No one needs to worry about the possibility that machines will ever think and make decisions in our stead.10 Insofar as computers do nothing—and will be capable of doing nothing—but execute the operations for which they have been programmed, what we have to worry about is the programs themselves, and those who design them. This kind of “delegation of power” to a system of apparatuses that are neither comprehensible nor controllable by those who use them (since this knowledge and this control—in this domain nothing has changed, whatever anyone may say, since the time of Taylorism—are exclusively reserved to engineers and technocrats) is already in itself enough reason to reject the influence of technology in general, and of neotechnology in particular, over our lives.11 Returning to reason as it was defined by the Abbé de la Chapelle, it is clear that the destructuring of the mind that we have seen in operation with regard to memory and imagination renders the very notion of truth literally incomprehensible. This explains, moreover, the irresistible seduction exercised in our time by deconstructionism and relativism. But we would be committing a serious error if we were to abandon the search for the truth, on the pretext that reason and enlightenment have degenerated, since the 18th century, into positivist dogmatism, according to a disastrous dialectic leading to the “self-destruction of reason”. As Theodor W. Adorno and Max Horkheimer demonstrated during the Second World War, “freedom is inseparable from enlightened thought”, even if the latter contains within itself “the seed of that regression that is evidenced everywhere in our time”. That is why, they continue, “reason must become conscious of itself”, or else “it will seal its own fate”. And, in effect, reason is foundering before our very eyes, perhaps irremediably.

Curiously, the subjective dimension of this dialectic of reason was described, a long time ago, by an author who is not exactly deemed to be “a champion of enlightened thought”:

“It is to be feared that by frequently seeing the positions that we assumed to be the most solid and enduring being undermined, we shall succumb to a resentful fear of reason, as a result of which we will not dare to believe even the most obvious truth” (Saint Augustine).

The conjoint collapse of these three faculties that were traditionally considered to be constitutive of the human mind provides a good explanation of the fact that more and more voices are joining the chorus of those who today propose that we should do away once and for all with the species itself, from which there is not much more that we can expect and whose limitations seem from now on to be an unbearable burden or a scandalous insult against the rights of the individual. The same dialectic that led reason to create the conditions of its own destruction has ended up inverting the “humanist” progressivism of the Renaissance into a project that seeks purely and simply to destroy humanity.12

***

In this “total confusion”, it is necessary to have a fixed point from which it will be possible to issue a judgment and attempt to reorient ourselves. The only point of orientation upon which we can base ourselves is our own nature as individual humans endowed with reason, a necessary (although not sufficient) condition for all discernment. We certainly do not claim to possess the least originality in this matter. Every day, however, we see so many discourses, so many inventions, so many events of such great originality and such imposing novelty appear, that we have not judged it desirable to add any of ours.

In opposition to the imperative that all the propaganda broadcasts never cease to drum into our ears, “Live for the moment”, we proclaim another very different one, which does not require you to purchase anything to put it into practice and which is not aimed at a collective entity composed of seven billion members, but at each single individual, and which opens up the possibility of a progress worthy of the name: “Know yourself.” And we are not employing this formula here in the manner of the psychoanalysts, who use it to disorient men by means of illusory demands and distance them from action on the external world, but because the possibility of collective action on the external world henceforth proceeds by way of the recognition that, over the course of one’s life, an individual is hardly capable of really acquiring and developing more than a very limited number of creative abilities or particular skills, and that what matters is to know what he is capable of if he really desires to be capable of doing what he wants to do.

Translated from the Spanish translation of portions of Chapter 4 of Jean-Marc Mandosio’s book, Après l’effondrement. Notes sur l’utopie néotechnologique, Encyclopédie des Nuisances, 2000.

English translation completed in March 2014.

The Spanish translation was originally published in the second issue of the journal, Maldeojo (June 2001).

The Spanish translation of this text may be viewed online (March 2014) at: http://www.network54.com/Forum/280518/message/1170936373/%BFFin+del+g%E9nero+humano-+%28Jean-Marc+Mandosio%29

  • 1 This text is a translation of portions of Chapter 4 of Jean-Marc Mandosio’s Après l’effondrement. Notes sur l’utopie néotechnologique, Encyclopédie des Nuisances, 2000.
  • 2 By neotechnology we mean first of all an economic and technical system, that of the “new communications technologies” (it might seem improper to define them as “new”, as is traditional, but this term is actually perfectly applicable, because incessant renewal constitutes an essential element of these technologies), with their production processes, their infrastructures (the “information highway”), their equipment (microprocessors, programs…) and their field of operations (the targeted public, that is, everyone); and secondly, an ideology that is indissociable from this system, which preceded it, engendered it and feeds off its development (this ideology crystallized at the end of the 1940s in the U.S. around the mathematical theory of communication—better known as “information theory”—elaborated by Claude Shannon and Warren Weaver in 1948, and cybernetics, “the scientific study of control and communication in the animal and the machine” formulated that same year by Norbert Wiener).
  • 3 This text is featured as a blurb for a collective work published in the year 2000 by Albin Michel/Spiritualités (From One Millennium to Another: The Great Change), with a table of contents that includes, among other “authors of renown”, Jean Baudrillard, André Comte-Sponville, Thierry Gaudin, Jacques Lacarrière and Edgar Morin. Throw in Paulo Coelho and you will have the entire spectrum of charlatans who invite us to “celebrate a new era in confidence and lucidity”.
  • 4 We could provide a multitude of examples of this kind from the book by David F. Noble, The Religion of Technology: The Divinity of Man and the Spirit of Invention, a very illuminating book that analyzes how scientists largely perceive their work through the lens of religious metaphors that always refer to overcoming (“redemption”) the concrete (the body, the inert, matter, limits, etc.) as a means of access to “paradise” (omnipotence, immateriality, eternity, ubiquity, etc.). [Note of the Spanish Translator.]
  • 5 On the other hand, other humans do not enjoy such “progress”; not to mention the fact that, in many regions of the world (Sub-Saharan Africa, Russia...), life expectancy is falling—which shows that nothing has been achieved and that no progress is irreversible.
  • 6 Legend tells of the Asian king Mithradates who consumed small doses of poison every day in order to acquire resistance to poisoning and overcome its effects should he be attacked by his enemies. [Spanish Translator’s Note.]
  • 7 As usual, however, it is only after we have “deactivated” these genes (deemed not only useless but positively harmful) in human beings at birth that we will discover that they also had some other function of which we were previously unaware, one which may well prevent the transgenic babies from enjoying their 30% longer lifespan.
  • 8 The erasure of the boundaries between the real and the virtual as the origin of the flattening out of the imagination has been analyzed by Marc Augé in The War of Dreams and L'Impossible Voyage. The transition to “total fictionalisation”, as Augé characterizes this process, means above all the weakening, or the pure and simple abolition, of the frontiers between reality and the domain of fictive creation: now the real, in order to survive, must imitate fiction (places must take on the characteristics of the images that are devouring them and finally be transformed into images, journalism must be conducted like fiction, politicians must become actors, etc.). To exist is now entirely a matter of being seen. Thus, hardly anyone travels in order to learn how to see again, but only to see once more the very images of the travel brochure obtained before the trip. The dissolution of the boundary between fiction and reality also implies that, from now on, only fiction can make itself heard: once it has been separated from the pole around which it once revolved, or against which it clashed and obtained its force, once the distance that generated the tension between the work, the public and the real has been eliminated, fiction is limited to repeating over and over again the same thing; the non-places that Augé analyzed in another book are precisely the expressions of this insane movement of repetition that has the entire world as a stage. [Note of the Spanish Translator.]
  • 9 This is the exact meaning of the English word computer and the Italian word calcolatore; before 1956, computers were called “electronic calculators” in France.
  • 10 Reliance on a mechanism “for support in the decision-making process” does not at all mean that the “expert system” makes the decision; to make such a claim amounts to saying that the decisions that some people make on the basis of the horoscope, the I Ching or any other form of divination are actually decisions made by the stars, the tetragrams or the coffee grounds. A machine that “decides” or that “gives its verdict”, the fate that “wants things to turn out this way”, a god that “demands” or “orders”: all are swindles that are used to deceive others, and sometimes to deceive oneself.
  • 11 See, with regard to this question, the books by David F. Noble on the criteria that determined the introduction of machinery in capitalist enterprises: Progress without People: In Defense of Luddism, for example. [Note of the Spanish Translator.]
  • 12 We are not speaking here of the accidental destruction of humanity in the case of, for example, a nuclear conflict, or from the “collateral damages” caused by industrial development, but of the planned destruction of humanity.
