It’s been thirty years since John Naisbitt published his landmark book, Megatrends, exploring ten major changes in society likely to affect the way we live, work and govern ourselves. Looking back across those three decades, it’s clear that Naisbitt got a lot right, and a few things wrong, about the information society whose arrival he recognized.

What may be useful to us, however, are the trends he saw in the early 1980s, the variables he expected to shape and balance the resulting changes and the reasons some of those variables didn’t behave as he predicted.

Industrial to Information

First, to his credit, Naisbitt devoted his opening chapter to the rise of an information society in place of the industrial society we had been living with since the end of WWII.

Although the personal computer had only just appeared in the late 1970s, and while many commentators were still grappling with the end of the industrial age and seeing nothing coherent on the horizon, Naisbitt recognized that information would become the critical resource and source of wealth in a new “information society.” He reasoned that society would need a new knowledge theory of value to replace earlier labor-based theories because, he also reckoned, we would “mass produce information” the way we used to mass produce hard goods.

He was right, of course, and we are mass producing information, or at least data, in volumes that even he couldn’t have conceived. However, the fact that so many information utilities, social media and search engines most notable among them, still rely on advertising for their revenue suggests that we haven’t yet solidified that new theory of information value.

Searching Isn’t Always Finding

Use a modern search engine and you will see that the creation side of the information equation is charging ahead at full steam. But try finding a complex or subtle item of information on the Internet and you will also see that the location side is still struggling. As the glut of content mounts, so grows the need for a comprehensive and universally understood way of identifying information, so that it is easily available rather than buried in a gazillion search engine hits.

The brick-and-mortar library world addressed this problem as early as 1876, when Melvil Dewey introduced his decimal classification system, and again in 1908, when the Library of Congress adapted Cutter’s dictionary cataloging scheme. The process of assigning classifications to content was further organized with the 1967 publication of the Anglo-American Cataloging Rules, or AACR.
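To make the positional logic of Dewey’s scheme concrete, here is a minimal sketch in Python. It is illustrative only: the ten main-class labels are the standard DDC summaries, but everything below the top level is omitted, and a real catalog would consult the full published schedules.

```python
# Illustrative sketch: a Dewey number encodes subject hierarchy positionally.
# Each successive digit narrows the topic (5 = Science, 51 = Mathematics, ...);
# only the ten standard DDC main classes are modeled here.

DDC_MAIN_CLASSES = {
    "0": "Computer science, information & general works",
    "1": "Philosophy & psychology",
    "2": "Religion",
    "3": "Social sciences",
    "4": "Language",
    "5": "Science",
    "6": "Technology",
    "7": "Arts & recreation",
    "8": "Literature",
    "9": "History & geography",
}

def main_class(dewey_number: str) -> str:
    """Return the top-level subject for a Dewey number like '025.431'."""
    return DDC_MAIN_CLASSES[dewey_number[0]]

print(main_class("025.431"))  # -> Computer science, information & general works
print(main_class("510"))      # -> Science (510 is mathematics)
```

The point of the design is that a shelf location and a subject statement are the same string, which is exactly the kind of pre-agreed, universally understood identifier the web still lacks.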

But with the rise of automation in the 1960s, the library world panicked at the thought of those new computer people invading its sandbox, and much of that momentum and progress was lost, sending the content world essentially back to square one in figuring out what an effective cataloging scheme should look like -- working for Xerox Education Division back then, I witnessed some of this happen, and it wasn’t pretty.

Complicating the process were multiple schemes developed by different players with different perspectives, different funding, even from different parts of the world, all convinced that they were right and none much interested in consensus.

The search engine world hasn’t helped either: effective content cataloging reduces the need for its systems by providing pre-configured paths to information, and if you don’t need to search the entire Internet to find something, you likely won’t see or respond to the ads that provide the lion’s share of Google’s (and its competitors’) revenue. So we spend most of our time searching through everything for tidbits of information that often exist in only a few places.

If we are, as a culture, to move past this “needle in a haystack” method of locating information, the answer, now as in the 19th century, is consensus. While we can debate the superiority of one cataloging scheme over another, the information world must sit down around its virtual table and agree on an approach that can be published, managed and adopted across the content creation and delivery world. This won’t be easy, and the resulting standard will require careful attention to flexibility and extensibility lest it become a limiting factor itself. Nevertheless, our future is tied to this effort, and it must be joined lest we drown in a rising sea of content.
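What might such a standard’s records look like? Here is a minimal sketch in Python, assuming core field names in the spirit of the Dublin Core element set (a real vocabulary); the record structure, the identifier shown and the extension mechanism are hypothetical illustrations, not any published standard.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an extensible catalog record. The core field names
# echo the Dublin Core element set; the structure itself is an assumption.

@dataclass
class CatalogRecord:
    identifier: str        # stable, globally unique ID (example value below)
    title: str
    creator: str
    subject: list[str]     # terms from an agreed controlled vocabulary
    date: str              # a sortable date string
    # Namespaced fields that communities can add without touching the
    # core schema -- one way to build in the extensibility argued for above.
    extensions: dict[str, str] = field(default_factory=dict)

record = CatalogRecord(
    identifier="urn:example:9001",
    title="Megatrends",
    creator="John Naisbitt",
    subject=["Social forecasting", "Information society"],
    date="1982",
    extensions={"example:format": "hardcover"},
)
print(record.title, record.subject)
```

The open extensions map is the hedge against rigidity: a small agreed core that every system understands, plus room for specialized communities to grow the record without breaking catalogs built on the core.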

Keeping our Humanity in a Techie World

Naisbitt also believed that the invasion of technology would generate a countervailing reaction from our humanity; he called it “high tech/high touch” and believed it would be society’s way of balancing the potentially dehumanizing effects of all that disconnection, distance and isolation.

Naisbitt believed, for example, that as keyboarded communications grew -- as they have, beyond his wildest expectations -- people would naturally turn to handwritten notes and letters. Similarly, he believed that people would largely reject telecommuting in favor of the human contact possible only in the office.

He was right about the technology part but didn’t foresee technology growing so fast that it would overtake the human responses he had postulated: the smartphone and social media have all but taken over personal communications, in many cases at 140 characters or fewer, and a generation raised texting and playing video games in the basement and on their PDAs seems just fine with the isolation of remote work.

So although Naisbitt believed that the high touch part of our nature would naturally balance the effects of technology, we are seeing that belief fall to a technology world moving very fast indeed. Today’s 2.5GHz PC microprocessor runs at roughly 10,000 times the clock speed of a 1960s mainframe such as IBM’s 7074 at 0.25MHz (2.5GHz ÷ 0.25MHz = 10,000), and humans adopt each new gadget before their humanity has a chance to catch up.

But the specter of technology-driven isolation is real, and Isaac Asimov’s chilling 1953 novel The Caves of Steel (soon to be a movie), in which city dwellers have become so agoraphobic that they no longer venture into the open, seems increasingly less fantastic.

Technology Without the Technologists?

Finally, Naisbitt foresaw a collision between the growing demand for people who could design, build and use technology and an education system that even then was producing a shrinking pool of graduates capable of, or interested in, taking those positions: “As we move into a more and more literacy intensive society, our schools are giving us an increasingly inferior product.”

He chronicled the various attempts by corporations, universities and educational organizations to understand and deal with the growing chasm between need and capability, citing some authorities’ assertion, even back then, that computer use should be mandatory in schools. Since 1982, we have seen just this trend: virtually all schools, from kindergarten through college, base a portion of their work on in-class and home computers. Yet our educational performance continues to decline, as does our worldwide ranking in critical disciplines.

Although Naisbitt couldn’t have seen it in practice back then, we may be facing a dilemma: putting computers in the classroom tends to make students better able to use computers, but it also provides instant answers to their questions, encouraging students to become consumers of information rather than thinkers.

Which Way Do We Go?

So what, you might ask, does all this mean? I think it means that the trends we are seeing were not completely unforeseen, but that our response hasn’t kept pace. Society has dealt with new technology since the days of the Luddites, but never has it arrived so rapidly that the culture finds itself drowning in the new ways with little or no countervailing response.

Naisbitt’s high tech/high touch is important not so much for how accurately he predicted technology’s influence as for his having understood that we could not, as a society, cope with wave upon wave of technology without a balance, lest we lose part of ourselves.

While Naisbitt thought culture would adjust naturally to balance the rush of technology, we can see that it has not. If we are to stay human, it seems we must do it for ourselves. From education to government to families to business rules, we must find and implement ways of identifying and preserving those values, connections and abilities that make us who we are.

Failing that, we may find ourselves looking in the proverbial mirror someday and not at all liking what we see.

Image courtesy of Kuzmin Andrey (Shutterstock)

Editor's Note: Another article by Barry Schaeffer you may enjoy:

-- To Cloud or not to Cloud: Two Narratives