A number of this week’s [February 19, 2018] milestones in the history of technology demonstrate society’s reactions to new technologies over the years: A discussion of AI replacing and augmenting human intelligence, a warning about the abundance of misinformation on the internet, and government regulation of a communication platform, suppressing free speech in the name of public interest.
On February 20, 1947, Alan Turing gave a talk at the London Mathematical Society in which he declared that “what we want is a machine that can learn from experience.” Anticipating today’s enthusiasm about machine learning and deep learning, Turing declared that “It would be like a pupil who had learnt much from his master, but had added much more by his own work. When this happens, I feel that one is obliged to regard the machine as showing intelligence.”
Turing also anticipated the debate over the impact of artificial intelligence on jobs: Does it destroy jobs (automation) or does it help humans do their jobs better and do more interesting things (augmentation)? Turing speculated that digital computers would replace some of the calculation work done at the time by human computers. But “the main bulk of the work done by these [digital] computers will however consist of problems which could not have been tackled by hand computing because of the scale of the undertaking.” (Here he was also anticipating today’s most popular word in Silicon Valley: “scale.”)
Advancing (and again, anticipating) the augmentation argument in the debate over AI and jobs, Turing suggested that humans will be needed to assess the accuracy of the calculations done by digital computers. At the same time, he also predicted the automation of high-value jobs (held by what he called “masters” as opposed to the “slaves” operating the computer) and the possible defense mechanisms invented by what today we call “knowledge workers”:
The masters are liable to get replaced because as soon as any technique becomes at all stereotyped it becomes possible to devise a system of instruction tables which will enable the electronic computer to do it for itself…
They may be unwilling to let their jobs be stolen from them in this way. In that case they would surround the whole of their work with mystery and make excuses, couched in well-chosen gibberish, whenever any dangerous suggestions were made.
Turing concluded his lecture with a plea not to expect intelligent machines to be any more intelligent than the humans they learn from:
One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge, why should we expect more of a machine? Putting the same point differently, the machine must be allowed to have contact with human beings in order that it may adapt itself to their standards.
What Turing did not anticipate was that digital computers would be used by individuals (as opposed to organizations) to pursue their personal goals, including adding inaccurate information to “the body of knowledge,” intentionally or unwittingly.
It was impossible for Turing to anticipate the evolution of the digital computers of his day into personal computers, smartphones, and the internet. That may have prevented him from seeing the possible parallels with radio broadcasting—a new technology that, only two decades before his lecture, allowed individuals to add to the body of knowledge and transmit whatever they were adding to many other people.
On February 23, 1927, President Calvin Coolidge signed the 1927 Radio Act, creating the Federal Radio Commission (FRC), forerunner of the Federal Communications Commission (FCC), established in 1934.
In “The Radio Act of 1927 as an Act of Progressivism,” Mark Goodman writes:
The technology and growth of radio had outpaced existing Congressional regulation, written in 1912 when radio meant ship-to-shore broadcasting. In the 1920s, by mailing a postcard to Secretary of Commerce Herbert Hoover, anyone with a radio transmitter could broadcast on the frequency chosen by Hoover. The airwaves were an open forum where anyone with the expertise and equipment could reach 25 million listeners.
By 1926, radio in the United States included 15,111 amateur stations, 1,902 ship stations, 553 land stations for maritime use, and 536 broadcasting stations. For those 536 broadcasting stations, the government allocated only eighty-nine wave lengths. The undisciplined and unregulated voice of the public interfered with corporate goals of delivering programming and advertising on a dependable schedule to a mass audience.
Congress faced many difficulties in trying to write legislation. No precedent existed for managing broadcasting except the powerless Radio Act of 1912. No one knew in 1926 where the technology was going nor what radio would be like even the next year, so Congress was trying to write the law to cover all potentialities.
Senator Key Pittman of Nevada expressed his frustration to the Senate chair: “I do not think, sir, that in the 14 years I have been here there has ever been a question before the Senate that in the very nature of the thing Senators can know so little about as this subject.”
Nor was the public much better informed, Pittman noted, even though he received telegrams daily urging passage. “I am receiving many telegrams from my State urging me to vote for this conference report, and informing me that things will go to pieces, that there will be a terrible situation in this country that cannot be coped with unless this report is adopted. Those telegrams come from people, most of whom, I know, know nothing on earth about this bill.”
The Radio Act of 1927 rested on a number of assumptions: that equality of transmission facilities, reception, and service was a worthy political goal; that the spectrum belonged to the public but could be licensed to individuals; and that the number of channels on the spectrum was limited compared to the number of those who wanted access to it.
Concluding his study of the Radio Act, Mark Goodman writes:
Congress passed the Radio Act of 1927 to bring order to the chaos of radio broadcasting. In the process, Congressional representatives had to deal with several free speech issues, which were resolved in favor of the Progressive concepts of public interest, thereby limiting free speech… Congressmen feared radio’s potential power to prompt radical political or social reform, spread indecent language, and to monopolize opinions. Therefore, the FRC was empowered to protect listeners from those who would not operate radio for “public interest, convenience, and necessity.”
Regulation stopped the masses from using the new communication platform, but six decades later another technology gave rise to a new (and massive) one. The Web, invented and rapidly proliferating in the 1990s as a software layer on top of the 20-year-old internet, connected all computers (and eventually smartphones) and greatly facilitated the creation and dissemination of information and misinformation. The result was another explosion in the volume of additions to the body of knowledge, this time by millions of people worldwide.
On February 25, 1995, astronomer and author Clifford Stoll wrote in Newsweek:
After two decades online, I’m perplexed. It’s not that I haven’t had a gas of a good time on the Internet. I’ve met great people and even caught a hacker or two. But today, I’m uneasy about this most trendy and oversold community. Visionaries see a future of telecommuting workers, interactive libraries and multimedia classrooms. They speak of electronic town meetings and virtual communities. Commerce and business will shift from offices and malls to networks and modems. And the freedom of digital networks will make government more democratic.
Baloney. Do our computer pundits lack all common sense? The truth is no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works…
Every voice can be heard cheaply and instantly. The result? Every voice is heard. The cacophony more closely resembles citizens band radio, complete with handles, harassment, and anonymous threats. When most everyone shouts, few listen…
While the Internet beckons brightly, seductively flashing an icon of knowledge-as-power, this nonplace lures us to surrender our time on earth. A poor substitute it is, this virtual reality where frustration is legion and where—in the holy names of Education and Progress—important aspects of human interactions are relentlessly devalued.
Stoll’s was a lonely voice, mostly because the people with the loudest and most dominant voices at the time, those who owned and produced the news, believed like Stoll that “the truth is no online database will replace your daily newspaper.”
They did not anticipate how easy it would be for individuals to start their own blogs, or that some blogs would grow into “new media” publications. They did not anticipate that an online database connecting millions of people would not only become a new dominant communication platform but would also start functioning like a daily newspaper.
Nicholas Thompson and Fred Vogelstein just published in Wired a blow-by-blow account of the most recent two years in the tumultuous life of Facebook, a company suffering from a split-personality disorder: Is it a “dumb pipe” (as dominant communication platforms seeking to avoid responsibility for the content they carry liked to call themselves in the past) or a newspaper?
One manifestation of the clash between Facebook’s “religious tenet” that it is “an open, neutral platform” (as Wired describes it) and its desire to crush Twitter (which is why it “came to dominate how we discover and consume news”) was the hiring and subsequent firing of a team of journalists: Facebook first tried using humans to scrub the news, then trusted the machine to do a better job. In a passage illustrating how prescient Turing was, Thompson and Vogelstein write:
…the young journalists knew their jobs were doomed from the start. Tech companies, for the most part, prefer to have as little as possible done by humans—because, it’s often said, they don’t scale. You can’t hire a billion of them, and they prove meddlesome in ways that algorithms don’t. They need bathroom breaks and health insurance, and the most annoying of them sometimes talk to the press. Eventually, everyone assumed, Facebook’s algorithms would be good enough to run the whole project, and the people on Fearnow’s team—who served partly to train those algorithms—would be expendable.
Most of the Wired article, however, is not about AI’s impact on jobs but about Facebook’s impact on the “fake news” meme and vice versa. Specifically, it is about the repercussions of Mark Zuckerberg’s assertion two days after the 2016 presidential election that the idea that fake news on Facebook influenced the election in any way “is a pretty crazy idea.”
The article moves from the dismay of Facebookers at their leader’s political insensitivity, to a trio (a security researcher, a venture capitalist and early Facebook investor, and a design ethicist) banding together to lead the charge against the evil one and talk “to anyone who would listen about Facebook’s poisonous effects on American democracy,” and on to the receptive audiences with “their own mounting grievances” against Facebook: the media and Congress. The solution, for some, is regulation, just as with radio broadcasting in the 1920s: “The company won’t protect us by itself, and nothing less than our democracy is at stake,” says Facebook’s former privacy manager.
All this social pressure was too much for Zuckerberg, who made a complete about-face during the company’s Q3 earnings call last year: “I’ve expressed how upset I am that the Russians tried to use our tools to sow mistrust. We build these tools to help people connect and to bring us closer together. And they used them to try to undermine our values.”
After going through a year of “coercive persuasion” (i.e., indoctrination), Zuckerberg has bought into the fake news that fake news can influence elections and destroy democracy. The Wired article never questions this meme, this dogma, this “filter bubble” where everybody accepts something as a given and never questions it. Most (all?) journalists, commentators, and politicians today never question it.
Wired explains Zuckerberg’s politically incorrect dismissal of the idea that fake news on Facebook can influence elections by describing him as someone who likes “to form his opinions from data.” The analysis given to him before he uttered his inconceivable opinion, according to Wired, “was just an aggregate look at the percentage of clearly fake stories that appeared across all of Facebook. It didn’t measure their influence or the way fake news affected specific groups.”
Excellent point. But the article, like most (all?) articles on the subject, does not provide any data showing the influence of fake news on voters’ actions.
I have good news for Zuckerberg, real news that may help jump-start his recovery from fake news brainwashing, on the way to, yet again, forming his opinions (and decisions) on the basis of facts. The January 2018 issue of Computer, published by the IEEE Computer Society, leads with an article by Wendy Hall, Ramine Tinati, and Will Jennings of the University of Southampton, titled “From Brexit to Trump: Social Media’s Role in Democracy.” It summarizes their study (and related work) of the role of social media in political campaigns. Their conclusion?
“Our ability to understand the impact that social networks have had on the democratic process is currently very limited.”