Tech World – Welcome to the Digital Revolution

By Saima Batool.

Predicting the future is hard, so let’s start by explaining the past. What’s the best lens for evaluating the arc of world history during the nineteenth century? For starters, it’s the dawn of liberal democracy. The French have already guillotined their king, and a handful of John Locke enthusiasts across the Atlantic have established a nascent republic. In the United Kingdom, the philosopher John Stuart Mill is ably defending liberal democracy and human dignity. It’s starting to look like monarchy has had its day. Then there’s the laissez-faire capitalist revolution, starring such economists as Thomas Malthus and David Ricardo. Karl Marx is bringing economics to the proletariat.

The nineteenth century is also the height of Western empire and colonization. It's the start of the era of total war. It's the beginning of the decline of religion as a political force and the rise of nationalism in its place. It's also, if one squints hard enough, the start of the era of human equality. Women demand equal rights in Seneca Falls, New York, and New Zealand becomes the first country to give them the vote. The United Kingdom outlaws the slave trade, the United States emancipates its slaves, and Russia frees its serfs.

So: democracy, capitalism, colonization, modern war, nationalism, and human equality. All of them vast in their implications, and all of them the catalyst for thousands of books. And none of them mattered. When looking back today, the most important geopolitical feature of the nineteenth century is obvious: it was the era of the Industrial Revolution. Without it, there’s no rising middle class and no real pressure for democracy. There’s no capitalist revolution because agrarian states don’t need one. There’s no colonization at scale because there’s a hard limit to a nonindustrial economy’s appetite for raw materials. There’s no total war without cheap steel and precision manufacturing. And with the world still stuck largely in a culture and an economy based on traditional subsistence agriculture, there’s quite possibly no end to slavery and no beginning of feminism.

The key drivers of this era were the steam engine, germ theory, electricity, and railroads. Without the immense economic growth they made possible in the twentieth century, everything else would matter about as much as if it had happened in the Middle Ages. Nobody knew it in 1800, but the geopolitical future of the nineteenth century had already been set in motion nine decades earlier, when Thomas Newcomen invented the first practical steam engine. Historians and foreign policy experts may not like to hear it, but all the things they teach and write about the geopolitics of the nineteenth century are mere footnotes to the Industrial Revolution. And exactly the same thing is likely to be true when we—or our robot descendants—write the history of the digital revolution of the twenty-first.

It’s not possible to itemize the great currents of twenty-first-century geopolitics with the same confidence as those of the nineteenth, but there are a few obvious ones. There’s the rise of China. There’s increased political tribalism and a possible breakdown of liberal democracy on the horizon. In the nearer term, there’s jihadist terrorism. And in the era of U.S. President Donald Trump, it’s hard not to wonder if the world is headed toward a future of declining cooperation and a return to naked, zero-sum great-power competition. But with the usual caveat that accompanies every prediction about the twenty-first century—namely, that it depends on humans still being around—none of these forces really matters, either. Right now, the world is at the dawn of a second Industrial Revolution, this time a digital revolution. Its impact will be, if anything, even greater than that of the first.

That said, this revolution hasn't started yet. The marvels of modern technology are everywhere, but so far, all that has been invented are better toys. A true technological revolution would increase the overall productivity of the global economy, just as the Industrial Revolution did, when machines allowed companies to produce vastly more goods with the same number of people. That is not occurring now. After a big decline in the 1970s, labor productivity growth inched steadily upward through 2007—mostly thanks to the widespread adoption of computerized logistics and global supply chains—and then sank. Despite today's technological marvels, productivity growth has been stubbornly sluggish for the past decade, which suggests that the latest generation of machines is not truly accomplishing much.

But all of this is on the verge of changing. Artificial intelligence, or AI, has been an obsession of technologists practically since computers were invented, but the naive optimism of the 1950s quickly gave way to the “AI winter” of the 1970s, as it became clear that the computers of the time lacked the raw processing power needed to match the human brain. Yet just as Moore’s law predicted, computing power kept doubling every year or two, and AI research advanced along with it. Neural networks gave way to expert systems, which in turn gave way to machine learning. That produced computers that could read printed words and do a better job of searching the Internet, but the holy grail of AI—a computer that could pass for a human being in normal conversation—remained elusive.

Even today, AI is still in its prenatal phase—answering Jeopardy! questions, winning at chess, finding the nearest coffee shop—but the real thing is not far off. To get there, what’s needed is hardware that’s as powerful as the human brain and software that can think as capably. After decades of frustration, the hardware side is nearly ready: the most powerful computers in the world are already as powerful as the human brain. Computer power is normally measured in floating point operations per second, or “flops,” and the best estimates today suggest that the human brain has an effective computing power of about ten to 100 petaflops (quadrillions of operations per second). As it happens, the most powerful computers in the world right now are also rated at about ten to 100 petaflops. Unfortunately, they’re the size of living rooms, cost more than $200 million, and generate annual electricity bills in the neighborhood of $5 million. What’s needed now is to make these supercomputers much smaller and much cheaper. A combination of faster microprocessors, improved custom microchips, a greater ability to conduct multiple calculations in parallel, and more efficient algorithms will close the gap in another couple of decades. The software side is inherently fuzzier, but progress over the past decade has been phenomenal. It’s hard to put solid numbers on software progress, but the people who know the most about AI—the researchers themselves—are remarkably optimistic. In a survey of AI experts published in 2017, two-thirds of respondents agreed that progress had accelerated in the second half of their careers. And they predicted about a 50 percent chance that AI would be able to perform all human tasks by 2060, with the Asian respondents figuring that it could do so closer to 2045.
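The hardware arithmetic above can be checked with a back-of-the-envelope sketch. The $200 million supercomputer price comes from the text; the $10,000 consumer price point and the Moore's-law doubling cadences are illustrative assumptions, not figures from the essay:

```python
import math

# Back-of-envelope sketch of the "couple of decades" hardware estimate.
# The $200 million supercomputer price is from the text; the $10,000
# consumer target and the doubling cadences are assumptions.
supercomputer_cost = 200e6          # dollars, roughly brain-scale hardware today
target_cost = 10e3                  # hypothetical affordable price point

# Number of cost-per-flop halvings needed to close a 20,000x price gap
doublings = math.log2(supercomputer_cost / target_cost)

for years_per_doubling in (1.5, 2.0):   # Moore's-law-style cadence
    years = doublings * years_per_doubling
    print(f"{years_per_doubling:.1f} yr/doubling -> about {years:.0f} years")
```

Under these assumptions, brain-scale hardware becomes affordable in roughly 20 to 30 years, consistent with the essay's "another couple of decades."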

These researchers don’t think that machines will be able to perform only routine work; they will be as capable as any person at everything from flipping burgers to writing novels to performing heart surgery. Plus, they will be far faster, never get tired, have instant access to all of the world’s knowledge, and boast more analytic power than any human. With luck, this will eventually produce a global utopia, but getting there is going to be anything but. Starting in a couple of decades, robots will put millions of people out of work, and yet the world’s economic and political systems are still based on the assumption that laziness is the only reason not to have a job. That’s an incendiary combination.


Make no mistake: the digital revolution is going to be the biggest geopolitical revolution in human history. The Industrial Revolution changed the world, and all it did was replace human muscle. Human brains were still needed to build, operate, and maintain the machines, and that produced plenty of well-paying jobs for everyone. But the digital revolution will replace the human brain. By definition, anything a human can do, human-level AI will also be able to do—but better. Smart robots will have both the muscle to do the work and the brainpower to run themselves. Putting aside airy philosophical arguments about whether a machine can truly think, they will, for all practical purposes, make Homo sapiens obsolete. Every other twenty-first-century geopolitical trend will look piddling by comparison. Take the rise of China. Millions of words have been spilled on this development, covering Chinese history, culture, demographics, and politics. All of that will matter over the course of the next 20 years or so, but beyond that, only one thing will matter: Will the Chinese have the world’s best AI? If they do, then they will take over the world if they feel like it. If they do not, then they won’t.

Jihadist terrorism? Even if it holds on for another decade or so—which is doubtful, given its steadily diminishing success since 9/11—it will soon become a victim of AI. Dumb drones, paired with machine analysis of massive databases of signals intelligence, have already set terrorist groups back on their heels. As drones become more capable and guidance software becomes smarter, no low-tech organization will stand a chance of survival. More generally, warfare itself will become entirely machine-driven. Paradoxically, this might make war obsolete. What's the point of fighting when no human bravery or skill is required? Besides, countries without AI will know they have no chance of winning, whereas countries with top-level AI will have better ways of getting what they want. Aircraft carriers and cruise missiles will give way to subtle propaganda campaigns and all-but-undetectable cyberwarfare.

Then there's liberal democracy. It is already under stress—on the surface, due to anti-immigrant sentiment, and on a deeper level, due to general anxiety about jobs. That is partly what propelled Trump to the presidency. But what has happened so far is just the mild tremor that precedes the tsunami to come. Within a decade, there is a good chance that nearly all long-haul truckers will be out of work thanks to driverless technology. In the United States, that's two million jobs, and once AI is good enough to drive a truck, it will probably be good enough to do any other job a truck driver might switch to. How many jobs will eventually be lost, and how quickly will they disappear? Different experts offer different estimates, but all agree that the numbers are frighteningly large and the time frames frighteningly short. A 2017 analysis by the auditing firm PwC predicted that 38 percent of all jobs in the United States are “at high risk of automation by the early 2030s,” most of them routine occupations such as forklift operator, assembly-line worker, and checkout clerk. By the 2040s, AI researchers project, computers will be able to conduct original math research, perform surgery, write best-selling novels, and do any other job with similar cognitive demands. In a world where 10 percent unemployment counts as a major recession and 20 percent would be a global emergency, robots may well perform a quarter or more of all work. This is the stuff of violent revolutions. And unlike the Industrial Revolution, which took more than 100 years to truly unfold, the digital revolution's job losses will play out in mere decades. This time, the revolution will take place not in a nation of shopkeepers but in a world of highly sophisticated multinational corporations that chase profits mercilessly. And AI will be the most profitable technology the world has ever seen.


What does all of this mean for politics? In an era of mass unemployment, one could argue that the form of government will be the most important thing in the world, since modern government is mostly about managing and controlling the economy for the greater good. But one could just as easily make the case that it will not matter at all: If robots can produce an unending supply of material goods, what exactly is there to manage and control? The only sure bet is that the form of government that will come out on top is the one that proves most capable of marshaling the power of AI for the most people. Marxists already have plenty of ideas about how to handle this—let robots control the means of production and then distribute the spoils to everyone according to his or her needs—but they don’t have a monopoly on solutions. Liberal democracy still stands a chance, but only if its leaders take seriously the deluge that’s about to hit them and figure out how to adapt capitalism to a world in which the production of goods is completely divorced from work. That means reining in the power of the wealthy, rethinking the whole notion of what a corporation is, and truly accepting—not just grudgingly—a certain level of equality in the allocation of goods and services.

This is a sobering vision. But there is some good news here, even in the medium term. The two most important developments of the twenty-first century will be AI-driven mass unemployment and fossil-fuel-driven climate change—and AI might well solve the problem of climate change if it evolves soon enough. After all, the world already has most of the technology needed to produce clean energy: wind and solar power. The problem is that they need to be built out on an enormous scale at huge expense. That's where cheap, smart robots could come in, constructing a massive infrastructure for almost nothing. And don't laugh, but once human-level AI is a reality, there's no reason to think progress will stop. Before long, above-human levels of AI might help scientists finally develop the holy grail of clean energy: nuclear fusion.

None of this is going to happen immediately. Today's technology is to true AI as the Wright Flyer is to the space shuttle. For the next couple of decades, the most important global movements will be all the usual suspects. But after that, AI is going to start making them seem trivial. Great-power competition will basically be a competition between different countries' AI technology. Tribalism won't matter: Who cares about identity if all the work is done by robots? Liberal democracy might still matter, but only if it figures out how to deal with mass unemployment better than other systems of government. Religion is going to have some tough times, too, as people's interactions with the world become increasingly mediated through constructs that seem every bit as thoughtful and creative as humans but rather plainly weren't created by God and don't seem to have any need for a higher power.

It’s long past time to start taking this stuff seriously. Even technophobes can see which way the wind is blowing—and historically, mass economic deprivation has produced fewer thoughtful progressive reforms than violent revolutions and wars. Needless to say, that doesn’t have to be the case this time around. It may be impossible to halt technology in its tracks, but it is possible to understand what’s coming and prepare for an enlightened response.
