
1 The Dawn of a New Era of Computing

Faithfully representing the daily life of a past period is a difficult goal for historians. It is also particularly hard to imagine our lives with the ancestors of the PC, and the struggle doubles when one tries to combine the two.

What computers actually were, and how they were perceived in the collective imagination before the spread of the PC, is well represented in an episode of the popular TV series “Columbo”, entitled “A Deadly State of Mind”, starring Peter Falk and George Hamilton, which first aired in 1975.

In one scene, the protagonist confronts the murderer in the machine room of a research data center, where they have gone to talk in peace. The computers are running, as can be inferred from the continuously flashing lights and the movement of the magnetic tape reels, yet the room is silent and empty: nobody performs queries on site, as jobs are launched by users at terminals located in other rooms or buildings, and results become available only after a possibly very long computation. Columbo himself learns this lesson in another episode of the series when, waiting for an address at the Los Angeles city data center, he pesters the employee with continuous requests to speed up.

This last example illustrates very well the typical, necessarily mediated interaction between an end user and a computer in the centralized, time-sharing computing model.

When the PC appeared on the world scene at the turn of the seventies and eighties, it became a widespread device, potentially (and, manufacturers hoped, actually) present in every home: an exciting tool for smart people, like the protagonist of the 1983 movie “WarGames”. This young, skilled, and promising hacker, while attempting to break into the computer of a well-known video game company, manages to reach a military supercomputer designed to respond to a missile attack with his PC, and ends up playing Tic-tac-toe against it.

Individuals and (personal) computers had a brand-new interaction model: end users could finally learn to code, because programming was not only feasible, but also one of the main activities on a PC and, above all, a great challenge (Fig. 2).

In 1990, in an interview about the digitalization of libraries for the documentary film “Memory and Imagination”, Steve Jobs [1] defined the computer as “the most remarkable tool that we’ve ever come up with”. The main benefits Jobs imagined for the immediate future were that the computer would allow people to access huge amounts of data with a click, and could also reshape the way of learning by means of simulation.

Ceruzzi [2], in his history of modern computing, entitles one chapter “Augmenting Human Intellect, 1975–1985”, suggesting that PCs acted as an actual driver of paradigm change and innovation in the industry: “When the established computer companies saw personal computer appear, they, too, entered a period of creativity and advance”.

In just over a decade, the spread of the PC reshaped access to knowledge for a large number of people, thanks to its small size and affordable price. Multi-purpose and suitable for individual use, PCs were designed to be operated directly by end users, without the mediation of computer scientists or technicians, as “WarGames” shows.

Therefore, the advent of the PC marked a watershed between a former, professional-only (military, academic, or corporate) use of computers, and a later diffusion to a vast and not necessarily skilled public.

2 The Individual Machine

The wide presence of the PC can be considered evidence of technological democratization, a first step in an irresistible process of individual independence and freedom: during the Seventies, technologists and computer scientists were already looking forward to such a future.

As reported by Rogers and Larsen [3], at a 1976 convention of government officials, Ted Nelson, author of influential books on the link between computers and freedom [4, 5], concluded his presentation by yelling, over the theme of “2001: A Space Odyssey”: “Demystify computers! Computers belong to all mankind!”.

He was urging his audience to disrupt the prevailing computer order and to bring about a conception of the computer as a personal device, accessible to everybody, including the elderly, women, children, minorities, and blue-collar workers. He also meant that the adversary to beat was Central Processing, from every commercial, ethical, political, and socio-economic point of view (Fig. 3).

Apart from philosophical considerations, new opportunities and advantages were evident and undeniable: users could rely on free access to digital calculation at any time, at home, each at their own pace; understand the principles of computer science by learning to code; and play and create video games, new media, and art (such as ASCII art), to mention only the most striking.

The skills involved in this phase were not only hard, linked to the knowledge domain the users were working in, but also soft, as users could implement their own solutions to problems, even creating algorithms or new fields of research: “the more you know about computers […], the better your imagination can flow between the technicalities, can slide the parts together, can discern the shapes of what you would have these things do” [5].

In the following years, the evolution of hardware and, consequently, of software led to the design of more user-friendly devices featuring GUIs; computers could still be programmed, but common users had ready-to-use software packages available for the most common tasks, with no need to program. From that moment on, the skills needed by common users included digital literacy. This marked the beginning of what Andrew Odlyzko [6] calls the “unavoidable tradeoff between flexibility and ease of use”.

Until the diffusion of Internet access, personal computers were mostly stand-alone systems, upgraded (and updated) via local procedures: this only partially limited the possibility of sharing documents and programs, as floppy disks and later CDs or DVDs, thanks to their small size and weight, worked perfectly as a ubiquitous form of data storage and exchange, at least for users’ files.

Nonetheless, the opportunity of immediate access to information, data, and resources drastically changed the PC, as technologists and computer scientists pointed out in the late Nineties by opening the debate about the death of the PC.

On the one hand, in 1998 Donald Norman, in “The Invisible Computer” [7], presents two visions of the future: a skeptical one for the PC and an optimistic one for the information appliance. Entering the new age of ubiquitous computing implies the use of devices far simpler than the personal computer, designed for the mass market and not for early adopters. He therefore underlines the importance of user-centered design, so that “the technology of the computer disappears behind the scenes into task-specific devices that maintain all the power without the difficulties”.

On the other hand, in 1999 Bill Gates [8] clearly states that the PC is not destined to die, replaced by other devices, but to evolve, working together with them: user experience and interaction will become more reliable and simpler, regardless of the complexity of the underlying technology; on this idea, he was ready to bet the future of his company.

The quarrel was not over in our century, but it partially changed form, due to the collapse of desktop sales: in 2011, Roger Kay [9] discussed the role remaining for desktop PCs, compared with laptops and smart devices, which were outpacing desktop units. He noted that desktop PCs would survive at least for a bedrock market segment, composed of users who did not want to, or could not, switch to another device: “the anti-mobility crowd (operators of desktop pools for task workers), those who wanted the modularity of desktops (the white box builders, who buy parts opportunistically and jerry-rig systems together), a “comfort” segment, who liked a desktop for its better ergonomics (although notebook users could pony up for a docking station with spacious keyboard, large monitor, and ergo mouse), and the performance folks, who wanted a big heat envelope to house the hottest, fastest components (graphical workstation users and PC gamers)”. In 2017 the reasons for the survival of desktop PCs appear much the same: despite their lack of mobility, they are still preferred to laptops and tablets as home servers, gaming systems, media centers, and video editing stations, due to their greater computational resources, larger mass storage capacity, quicker reaction under heavy workloads, ability to run multiple operating systems [10], and better I/O peripherals (e.g., screen size and resolution, refresh rate, sound quality).

3 Was It a Revolution?

Trying to answer this question is very challenging, mainly because it can be considered from many perspectives, and different starting points can lead to very different conclusions.

From an industrial point of view, Peled [11], in 1987, after describing how pervasive the presence of computers in industry and research had become, welcomes the next revolution, which would be supported by parallel processing and miniaturization and would transform the computer into a “ubiquitous intellectual utility”, taking for granted that one revolution had already happened.

As for a sociological interpretation, in 1988 Pfaffenberger [12] raised the question of the mythology of the personal computer: for vendors, it was a crucial factor in selling millions of computers to people who “in reality had very little practical need for them, and most of the machines wound up in closet”. Nonetheless, the social change implied in the spread of the personal computer cannot pass unnoticed: the computer industry founders needed such an epic narration, one that “may have been as important to an emergent industry as its technical achievements”, because “investors must construct a meaning-framework for their artifacts to create a need that did not previously exist”. According to the author, in the early personal computer era the dynamics of social behavior were determined by the interaction of three factors: a dominant ideology, represented by the centralized, administrative authority in large organizations; an adjustment strategy, typical of hackers and home users, who wanted to change their low rank without openly refusing the underlying ideology of the system; and a reconstitution strategy, endorsed by the computer industry founders and embraced by many users, aimed at replacing the existing dogmatic value system with a new one that would encourage decentralization. In a sense, a warm, controlled revolution.

In 1992, Pool [13] gives an overview of the innovations introduced into the world of research by personal computers: ranging from greater ease of communication, to help in carrying out experiments, to data collection, to the execution of calculations, and to the drafting of a manuscript or a grant proposal, PCs paved the way for a revolution in teaching and disseminating science.

Six years later, in 1998, one more sociological reading displays the changes introduced into everyday family life by both PCs and the Internet. Hood [14] points out the main benefits of the “entrepreneurial freedom” owed to what he unhesitatingly calls “The PC Revolution”: better control of family finances; the growth of schooling over the previous years; a boom in home start-up businesses; a new market in the field of education; easier access to retail goods and services; and a drastic cut in phone bills. In other words, a burst of energy for American society, depending on both private individuals’ and the government’s initiative.

Such energy is still considered an innovation driver in a later (2005) socio-technological study by Gershenfeld [15], who takes the PC revolution for granted and considers it a prerequisite. The “coming revolution” will be personal fabrication, i.e. the ability (for everybody) to design and produce objects with devices combining consumer electronics and industrial tools.

Another interesting point of view from which to observe the spread of the PC and evaluate its revolutionary potential is knowledge. Beaudry et al. [16] used U.S. metropolitan area-level data for the period 1980–2000, the PC diffusion era, to test “whether the recent patterns of PC adoption and the increase in the return to education across U.S. metropolitan areas conform to the predictions of a model of endogenous technology adoption describing such revolutions.” In other words, the authors tried to verify, with an econometric analysis, whether the diffusion of the PC, a new, skill-requiring technology, can be considered a revolution (and not a long-term change). If so, certain conditions established by their model must hold. Their analysis of the data highlights, in the period considered, the distinctive implications of technological revolutions, according to a neoclassical model of technology adoption. For example, the data show greater returns in the U.S. areas where the presence of skilled workers was significant and technology therefore spread more rapidly: this is consistent with a model of revolution and its speed of diffusion.

From an economic standpoint, in 2011 Sichel [17] presents an economic framework for evaluating the aggregate impact of computers: the conclusion of his quantitative and historical analysis is that the contribution of computers (including PCs) to American economic growth has been modest. In 2015, Berger and Frey [18] offered a very different interpretation of the impact of the PC revolution, showing “how a previously undocumented shift in the skill content of new jobs, following the Computer Revolution of the 1980s, has altered patterns of new job creation across U.S. cities”.

To conclude, we can say that ordinary people’s perception of a revolution finds overall confirmation in important studies across different fields of research.

4 Extending the Historical Landscape

The introduction and spread of PCs changed everything or, at least, determined a significant technological revolution: starting from this idea, and under the influence of some of the studies mentioned above [16, 18], a small set of indexes has been identified, useful for comparatively describing users’ relationship with the available devices.

The goal was to point out how users’ skills, and consequently their experience, changed over the decades, before and after such a turning point.

The three indexes are the following (a minimal code sketch follows the list):

  1. Skills: a list of competences required or recommended to operate the device.

  2. Divide: the gap between average and expert users’ skills.

  3. Locus of control: the distance between the users and the actual control of their operations. It can be local, if the user can operate the device autonomously, or remote, if device operation depends on access to a facility or a service [19].
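To make the comparative use of the three indexes concrete, here is a minimal Python sketch. It is purely illustrative: the age names, skill sets, and locus values are rough simplifications inspired by the discussion and Tables 1–5 below, not data drawn from them. Each age is modeled as a record, and the divide index is derived as the number of expert competences that average users lack.

```python
from dataclasses import dataclass
from enum import Enum


class Locus(Enum):
    """Locus of control: where the actual control of operations resides."""
    LOCAL = "local"    # the user operates the device autonomously
    REMOTE = "remote"  # operation depends on access to a facility or a service


@dataclass
class Age:
    name: str
    average_skills: frozenset  # competences of average users (illustrative)
    expert_skills: frozenset   # competences of expert users (illustrative)
    locus: Locus

    @property
    def divide(self) -> int:
        # Divide index: how many expert competences average users lack.
        return len(self.expert_skills - self.average_skills)


# Hypothetical, simplified values; see Tables 1-5 for the actual discussion.
AGES = [
    Age("pre-digital", frozenset({"literacy", "numeracy"}),
        frozenset({"literacy", "numeracy"}), Locus.LOCAL),
    Age("mainframe", frozenset({"interface acquaintance"}),
        frozenset({"programming", "hardware design"}), Locus.REMOTE),
    Age("heroic PC", frozenset({"programming", "hardware assembly"}),
        frozenset({"programming", "hardware assembly"}), Locus.LOCAL),
    Age("human PC", frozenset({"digital literacy"}),
        frozenset({"DBMS", "OO programming", "graphics"}), Locus.LOCAL),
    Age("post-PC", frozenset({"app use"}),
        frozenset({"cross-disciplinary expertise"}), Locus.REMOTE),
]

for age in AGES:
    print(f"{age.name:>12}: divide={age.divide}, locus={age.locus.value}")
```

Run as-is, the sketch reproduces the two qualitative patterns discussed in Sects. 4.5 and 5: a zero divide in the pre-digital and heroic PC ages, where average and expert users substantially coincide, and the local/remote alternation of the locus of control.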

4.1 Before the Digital

At the beginning of the 20th century, the ancestors of the PCs available to users were mechanical and later electromechanical devices, mainly intended for typewriting and accurate calculation.

The results (either documents or calculations), unless printed or written on paper, were volatile and could not be stored in a memory to be retrieved or shared.

The average users were therefore professionals (employees, journalists, writers, accountants, bookkeepers, engineers, scientists) in need of reliable devices at a reasonable price.

Only faster execution made expert users stand out, as devices were not programmable and allowed a limited set of operations. Therefore, besides skills closely linked to a specific domain of knowledge, users really needed only literacy and numeracy (Table 1).

Table 1. Pre-digital

4.2 The Age of Gods, or the Mainframe Age

In the mainframe era, access to computers was limited because of their cost and their value as strategic tools; mainframes were typically housed on university campuses, in corporate buildings, or in government facilities, in large inaccessible rooms like the one in Fig. 1.

Fig. 1. The scene taken from “A Deadly State of Mind”

Fig. 2. A scene taken from “WarGames”

Fig. 3. The front cover of Ted Nelson’s manifesto “Computer Lib”

As before, users were mainly professionals; as for their skills, however, they can be split into two different, non-overlapping sets.

On the one hand, expert users, such as engineers, programmers, and computer scientists, needed both hard and soft skills in order to design software meeting the specific requirements of large institutions or businesses in the fields of research, national security, administration, and so forth. They had direct access to the mainframe and to institutional or corporate databases.

Average users, though professionals, did not need specific device-related skills: acquaintance with the interface was enough, as they could only run predefined procedures in order to obtain mostly printed answers to their time-consuming queries.

For average users, interaction with computers was quite comparable to consulting an oracle in ancient times: the question was put to an intermediary, and the answer (the job result) was handed back to the consultant, printed on continuous forms striped on one side to facilitate the reading of long series of data; if the question had been inaccurate or incomplete for some reason, or if the results suggested a new question, the whole procedure had to be repeated from the beginning, because any further query required a brand-new job to be executed.

In addition to the previous skills, users therefore needed the competences listed in Table 2.

Table 2. Mainframe age

4.3 The PC Age

The appearance of the PC in the late 1970s was noticed above all by professionals and by a category of digital enthusiasts who could not wait to get their hands on it: the proto-nerds, already described above.

The Heroic Age.

Young proto-nerds devoted themselves to programming, typically in a garage or a cellar, where they could work in peace and store old cards, cables, peripherals, and monitors to be assembled, without cluttering their rooms or being scolded by their mothers. The protagonist of the TV series “Whiz Kids”, aired in 1983–1984, perfectly embodies this figure: Richie Adler has become an expert computer user thanks to Ralf, a PC he assembled from equipment received from his father, a telecommunications engineer who acquires obsolete used devices; this explains how a teenager could afford a personal computer in the early 1980s. Richie’s skills range from assembling hardware to programming, applied to both simple computer graphics and robotics. Besides this mythical figure of owner, many professionals began to use a PC in order to have exclusive access to a modern device. Their competence was mainly linked to their field of expertise and to the software designed to support them; in any case, they needed acquaintance with both hardware and software (Table 3).

Table 3. Heroic PC age

The Human Age.

The decrease in the price of computers and their greater usability, due to the introduction of visual interfaces and the mouse, made computers mainstream. The desktop metaphor made a significant contribution to reducing learning times, creating the perception of a reduced distance between the computer and real life. Internet access led to a further growth of interest in PCs and, finally, laptops gave users the ability to take their computers with them.

Non-specialist users became the majority: for them, digital literacy and numeracy (including skills like using the main features of an operating system, writing a document, calculating with a spreadsheet, surfing the Internet, or sending an email) were quite enough, while expert users’ skills could range from creating and querying DBMSs, to computer graphics design, to object-oriented programming, to modeling, to neural networks, and lastly to robotics. The gap between the two begins to be remarkable.

As for the locus of control, the PC is still the center of any operation (Table 4).

Table 4. Human PC age

4.4 The Post-PC Era

This change has been very effectively represented in an advertisement launched by Apple in 2017: a young girl starts her day with her tablet, then video-chats with her friends, takes photos for a research project, reads comics on the bus, and finally writes up her work lying on the lawn in the back garden; a neighbor asks her what she is doing on her computer, and the girl replies: “What’s a computer?” [20]. The post-PC era had become part of everyday life.

Handheld devices, such as tablets or smartphones, have completely transformed the user’s relationship with the computer: very small size, portability, an always-active Internet connection, increasingly simplified interaction, and voice-activated functions make these devices the ideal companion for everybody, professional or not. Even very young children can learn in a few minutes to take pictures, text (by dictating or sending a voice message), draw, and play games on a smartphone.

According to the 2017 U.S. Mobile App Report by comScore, app users spend 77% of their time on their top 3 apps and 96% on their top 10, meaning that the features actually used are extremely limited, and possibly users’ skills are limited too.

Complex software systems are not designed for mobile devices for many reasons: insufficient computational power, lack of specific libraries or frameworks, and usability limitations such as the small screen size or the impossibility of displaying a multipart interface. However, as users do need to read a document, check their data in a spreadsheet, or revise a presentation while on the move, software and storage are also offered as a service by providers: cloud computing is the current shift.

PCs (or laptops) are still used for office work, programming, and resource-demanding software, but they can also run web and cloud applications.

The locus of control is becoming more and more remote, so that users do not even know (and do not need to know) where their data are actually stored or which server is performing their queries.

Skills have drastically changed, and consequently so has the divide between average and expert users, who are increasingly required to have cross-disciplinary expertise to deal with increasingly complex problems (Table 5).

Table 5. Post-PC age

4.5 Continuity and Discontinuity

The PC revolution reveals strong discontinuities with both the preceding and the following periods. They appear even more evident when the considerations made so far are summarized in charts, from the three points of view.

In Fig. 4, users’ skills are represented as heaps: since average and expert users substantially coincide in the pre-digital and heroic PC ages, they are shown as one. In addition, the incipient reduction of the need for literacy, due to assistive technologies like spell checkers and speech recognition and synthesis, is represented as a partially faded bar.

Fig. 4. The users’ skills heap

As for average users, we can remark that the skills they need are mainly limited to a small number of basic competences, such as literacy and numeracy, whether digital or not. Expert users, on the contrary, benefit from the huge variety of available applications.

If we consider the two sets of heaps separately, it appears that average users’ skills reached a peak during the heroic PC age, when many more skills were required to operate computers. Then we witness a drop to a defined and limited set of skills. Expert users’ skills, on the contrary, continue to grow and become more and more complex, also requiring significant technical expertise (Fig. 5).

Fig. 5. The average vs. expert user skills divide

Again, if we look at the locus of control, we can spot a kind of periodicity over time, as it moves from local to remote during the mainframe and post-PC ages, the latter requiring an always-on Internet connection. While the first shift (from the pre-digital to the mainframe age) involved a drastic change of device type, the second was determined by the progressive introduction of cloud computing services, such as file storage, office suites, shared repositories for collaborative projects, online code editors and compilers, and many more (Fig. 6).

Fig. 6. The locus of control

5 Conclusions

The considerations made so far show the presence of two dynamics.

On the one hand, the heroic era of the PC highlights a strong discontinuity with respect to the past, introducing an irresistibly growing gap between the skills of average users and those of experts. On the other hand, a continuity can be spotted in the pseudo-periodic pattern shown by the locus of control.

In this sense, we could imagine the next step, the one following the post-PC era, as characterized by a return of the locus of control to local and a further widening of the skills gap: perhaps an enhanced Body Area Network integrating bio-technical features?