The Rise of Facebook to the Top of the Social Networking Market

Facebook was a late-comer to the social networking market, defying the business mantra of “first-mover advantage.” Why did it succeed so fabulously where the pioneers of the industry, already counting millions of participants, failed so spectacularly?

On April 15, 1999, LiveJournal was launched by 19-year-old Brad Fitzpatrick as a way of keeping in touch with his high school friends. LiveJournal started as a blogging site that incorporated social networking features inspired by the WELL.

In 2005, when it had 5.5 million users, and was “one of the largest open source projects on the web,” it was sold to blogging software company Six Apart. In 2007, when LiveJournal had 14 million users, it was sold to Russian media company SUP Media which wanted to make it “a major worldwide brand.” In 2012, LiveJournal had close to 2 million active users.

The social networking market was launched in early 1997 by SixDegrees.com, the first website to combine features that existed on other websites at the time: user profiles, lists of “friends,” and the ability to browse friends’ lists, “placing your Rolodex in a central location,” as its founder Andrew Weinreich said at the time. In 1999, with 3.5 million users, it was bought by YouthStream Media Networks, which shut it down a year later.

There were many other early movers around the world, but the two prominent ones—and for a while, very successful—were Friendster and MySpace.

“One of the biggest disappointments in Internet history,” Friendster was launched in 2002 and reached 3 million users within a few months. It was sold in 2009 to one of Asia’s largest Internet companies, MOL Global, eventually reaching, at its peak, 115 million users, mostly in Asia. Friendster was shut down in 2015.

“Many early adopters left [Friendster] because of the combination of technical difficulties, social collisions, and a rupture of trust between users and the site,” said danah boyd at the time. The major beneficiary of these defections was MySpace, launched in 2003.

MySpace did not restrict membership in its network, allowed users to customize their pages and to make them public, and continuously added features based on user demand. By the time The Facebook launched at Harvard University in February 2004, MySpace had more than a million users and was becoming America’s dominant social network.

In 2005, MySpace—and its 25 million users—was sold to News Corp. The next year, it reached 100 million users and surpassed Google as the most visited website in the U.S. But it also started to run into serious problems. “Its new owner,” writes Tom Standage in Writing on the Wall, “treated it as a media outlet rather than a technology platform and seemed more interested in maximizing advertising revenue than in fixing or improving the site’s underlying technology.”

In April 2008, MySpace was overtaken by Facebook in terms of the number of unique worldwide visitors, and in May 2009, in the number of unique U.S. visitors. Why did Facebook become the largest and most dominant player in the social networking market?

In business, timing is everything. There is no early-mover advantage, just as there is no late-mover advantage. In Facebook’s case, the market was ready with rising broadband availability and Internet participation by an increasingly diverse audience (meaning entire families could participate in a social network). Early social networks already conditioned consumers to the idea (and possible benefits) of social networking. They also provided Facebook with a long list of technical and business mistakes to avoid.

A major lesson was controlled growth. Resisting the strong temptation, especially acute for a social network, to grow very rapidly, Facebook started as a Harvard-only network, then expanded gradually, in stages, to other universities, high schools, and corporate users, requiring a verified email address. This—and its clean and non-customizable design—allowed it to establish a reputation as a “safe space,” in contrast to MySpace. Only in September 2006 did it open membership to anyone aged 13 or older.

Controlled growth also meant gradually building a robust and reliable technology infrastructure, avoiding the technical problems that plagued its competitors. Its initial success and reputation helped attract smart and experienced engineers who invented new tools and technologies, allowing Facebook to build a proprietary technology platform optimized to handle the demanding requirements of serving (eventually) hundreds of millions of users simultaneously.

Having smart engineers also helped in developing products not for the sake of building products that engineers love, but products that delight users and keep them coming back for more. But constantly adding new features to Facebook was not only the product of innovative minds. It was driven by the ruthless competitive spirit of the company’s leadership. That meant copying competitors’ and potential competitors’ best features. Or buying them (and their teams) outright.

“Good artists copy, great artists steal,” Steve Jobs liked to quote Picasso, and Facebook “stole” from Google the obsession with product quality (as perceived by users) and with robust (and home-grown) technology infrastructure. These obsessions helped attract more and more users to both companies, both late-comers to their respective markets, and they came to dominate them. But both companies also made sure they had a sustainable and successful business model, and to this end both focused on re-inventing advertising.

“Half the money I spend on advertising is wasted; the trouble is I don’t know which half,” said John Wanamaker more than a century ago. Google changed this sorry situation by linking ads to specific (and demonstrable) results. Facebook went further by linking ads to specific (and targeted) users. Buttressed by a string of other inventions in selling and distributing ads online, together they dominate digital advertising, a $209 billion market worldwide in 2017.

But Facebook has succeeded by adding one more important ingredient, expertly handled by its founder, Mark Zuckerberg: public relations. This has been particularly important to the success of a social network serving 2.2 billion monthly active users (as of January 2018), especially in light of its maniacal focus on an advertising-dependent business model based on mining users’ data, content, and actions.

Like other social networks, Facebook has found itself facing user revolt, time and again, when it changed its features and policies and when yet another revelation about its disregard for users’ privacy emerged.

One such example is recounted by David Kirkpatrick in The Facebook Effect. In February 2009, Facebook changed its “terms of use.” The essence of the change was captured by the title of an article in Consumerist: “Facebook’s New Terms Of Service: ‘We Can Do Anything We Want With Your Content. Forever.’”

Zuckerberg answered the ensuing fire-storm with a post titled “On Facebook, People Own and Control their Information,” in which he said “In reality, we wouldn’t share your information in a way you wouldn’t want.” (Kirkpatrick and others attribute the post and this quote to Zuckerberg; as of this writing, this Facebook post is attributed to Kathy Chan).

That promise didn’t help quell the uprising, and three days later Zuckerberg announced that Facebook was reverting to the previous terms of service. And he wrote (“with a kind of rhetoric you seldom hear from a CEO,” says Kirkpatrick):

History tells us that systems are most fairly governed when there is an open and transparent dialogue between the people who make decisions and those who are affected by them. We believe history will one day show that this principle holds true for companies as well, and we’re looking to moving in this direction with you. [Contemporary links to that post now lead to the current Facebook news page]

In this democratic spirit, Zuckerberg announced that the decision on which direction to move would be submitted to a vote, binding if at least 30% of the 200 million Facebook users (at the time) participated. Only 666,000 votes were cast, with 74% favoring the new Statement of Rights and Responsibilities and Facebook Principles (“These documents reflect comments from users and experts received during the 30-day comment period,” as opposed to the Terms of Use, a document that “was developed entirely by Facebook,” the company helpfully explained to voters).
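The arithmetic implied by those figures can be sketched in a few lines. The numbers come from the text above; the calculation itself is only illustrative:

```python
# Arithmetic behind the 2009 Facebook governance vote, using the figures
# quoted above: 200 million users, a 30% turnout threshold for the vote
# to be binding, 666,000 ballots cast, 74% of them in favor.
users = 200_000_000
binding_share = 0.30
votes_cast = 666_000
in_favor_share = 0.74

threshold = int(binding_share * users)   # votes needed for a binding result
turnout = votes_cast / users             # fraction of users who voted
shortfall = threshold - votes_cast       # how far short the vote fell

print(f"needed {threshold:,} votes, got {votes_cast:,} ({turnout:.2%} turnout)")
print(f"shortfall: {shortfall:,} votes; {in_favor_share:.0%} of ballots favored the new documents")
```

In other words, the vote fell short of the binding threshold by roughly 59.3 million votes, a turnout of about a third of one percent.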

“Internet activists were impressed,” writes Kirkpatrick, and Harvard Law professor Jonathan Zittrain, “wrote an admiring article noting that Zuckerberg had encouraged Facebook’s users to view themselves as citizens—Facebook’s citizens.”

Mastery of public relations, among other things, explains Facebook’s success. It may well help it weather its current crisis of confidence and trust.

Originally published on Forbes.com

Posted in Facebook, social media, Social Networks

2001: A Space Odyssey and our fascination with (killer) AI

50 years ago, on April 2, 1968, the film 2001: A Space Odyssey had its world premiere at the Uptown Theater in Washington, D.C. Reflecting the mixed reactions to the film, Renata Adler wrote in The New York Times that it was “somewhere between hypnotic and immensely boring.”

The 160-minute film with only 40 minutes of dialogue went on to become “the movie that changed all movies forever,” as the poster for its 50th anniversary re-release modestly proclaims. There is no doubt, however, about its influence on popular culture, specifically on the widespread belief in the possibility of sentient machines possessed of killer instincts.

HAL 9000 became known, even to many people who never watched the movie, as the artificially intelligent computer that kills, one after another, the astronauts on a mission to Jupiter. A Hollywood ending required that the one surviving astronaut, Dave Bowman, would triumph over the human-like machine. “Catchphrases like HAL’s chilling ‘I’m sorry, I can’t do that Dave’,” says Paul Whitington, “entered the popular lexicon.”

The science-fiction writer and futurist Arthur C. Clarke, who worked with the director Stanley Kubrick on the plot for the movie (and later published a novel with the same title), said in an interview:

Of course the key person in the expedition was the computer HAL, who as everyone said is the only human character in the movie. HAL developed slowly. At one time we were going to have a female voice. Athena, I think was a suggested name. I don’t know again when we changed to HAL. I’ve been trying for years to stamp out the legend that HAL was derived from IBM by the transmission of one letter. But, in fact, as I’ve said, in the book, HAL stands for Heuristic Algorithmic, H-A-L. And that means that it can work on a program’s already set up, or it can look around for better solutions and you get the best of both worlds. So, that’s how the name HAL originated.

That both Clarke and Kubrick denied any intentional reference to IBM may have had to do with the fact that IBM was one of the many organizations and individuals consulted while they created the film. They started the four-year process in 1964, the year IBM introduced its own masterpiece, the computer that sealed the company’s domination of the industry for the next quarter of a century.

On April 7, 1964, IBM announced the System/360, the first family of computers spanning the performance range of all existing (and incompatible) IBM computers. Thomas J. Watson Jr., IBM’s CEO at the time, wrote in his autobiography Father, Son, & Co.:

By September 1960, we had eight computers in our sales catalog, plus a number of older, vacuum-tube machines. The internal architecture of each of these computers was quite different, different software and different peripheral equipment, such as printers and disk drives, had to be used with each machine. If a customer’s business grew and he wanted to shift from a small computer to a large one, he had to get all new everything and rewrite all his programs often at great expense. …

[The] new line was named System/360—after the 360 degrees in a circle—because we intended it to encompass every need of every user in the business and the scientific worlds. Fortune magazine christened the project “IBM’s $5,000,000,000 Gamble” and billed it as “the most crucial and portentous—as well as perhaps the riskiest—business judgment of recent times.”… It was the biggest privately financed commercial project ever undertaken. The writer at Fortune pointed out that it was substantially larger than the World War II effort that produced the atom bomb.

And like nuclear energy, the System/360 and all the computers and networks of computers that came after it, could be used for creation or for destruction. It was a tool, and as 2001 depicts in the first part of the movie, tools can be used to help humanity or as a weapon in humanity’s wars.

In Profiles of the Future, published in 1962, Arthur Clarke wrote:

The old idea that Man invented tools is… a misleading half-truth; it would be more accurate to say that tools invented Man.  They were very primitive tools… yet they led to us—and to the eventual extinction of the apeman who first wielded them… The tools the apemen invented caused them to evolve into their successor, Homo sapiens. The tool we have invented is our successor. Biological evolution has given way to a far more rapid process—technological evolution. To put it bluntly and brutally, the machine is going to take over.

Talk of the machine taking over has risen to the surface of public discourse over the last 50-plus years each time computer engineers have added yet another “human-like” capability to the tools they create. After IBM’s Watson AI defeated Jeopardy! champion Ken Jennings in 2011, Jennings wrote in Slate:

I understood… why the engineers wanted to beat me so badly: To them, I wasn’t the good guy, playing for the human race. That was Watson’s role, as a symbol and product of human innovation and ingenuity. So my defeat at the hands of a machine has a happy ending, after all. At least until the whole system becomes sentient and figures out the nuclear launch codes…

The fear that machines will figure out the nuclear codes and destroy their creators was called “absurd” by one of 2001’s creators, who nevertheless strongly believed, as we saw above, that machines would indeed “take over.” Dismissing the “popular idea” of the malevolent AI killer he went on to co-create a few years later, Clarke wrote in Profiles of the Future:

The popular idea, fostered by comic strips and the cheaper forms of science-fiction, that intelligent machines must be malevolent entities hostile to man, is so absurd that it is hardly worth wasting energy to refute it. I am almost tempted to argue that only unintelligent machines can be malevolent… Those who picture machines as active enemies are merely projecting their own aggressive instincts, inherited from the jungle, into a world where such things do not exist. The higher the intelligence, the greater the degree of cooperativeness. If there is ever a war between men and machines, it is easy to guess who will start it.

According to this materialistic fantasy, which many computer and AI engineers adhere to today, tools create us and will match or surpass human intelligence but will do no harm.

Or maybe not, on both counts: perhaps computers can harm humans on their own, and perhaps machines will never match or surpass human intelligence.

Here’s Piers Bizony in Nature:

Certainly, in the film, the surviving astronaut’s final conflict with HAL prefigures a critical problem with today’s artificial-intelligence (AI) systems. How do we optimize them to deliver good outcomes? HAL thinks that the mission to Jupiter is more important than the safety of the spaceship’s crew. Why did no one program that idea out of him? Now, we face similar questions about the automated editorship of our searches and news feeds, and the increasing presence of AI inside semi-autonomous weapons…

Should we watch out for superior “aliens” closer to home, and guard against AI systems one day supplanting us in the evolutionary story yet to unfold? Or does the absence of anything like HAL, even after 50 years, suggest that there is, after all, something fundamental about intelligence that is impossible to replicate inside a machine?

Originally published on Forbes.com

Posted in Artificial Intelligence

The 3 Eras of IT: Computing, Communications, Communities

A number of this week’s milestones in the history of technology trace the shift in the focus of the “computer industry” from computing to communications to communities and the shifting fortunes of key players such as IBM, Apple, and Google.

On April 1, 1976, Steve Jobs, Steve Wozniak, and Ronald Wayne signed a partnership agreement that established the company that would become Apple Computer, Inc. on January 3, 1977. The purpose of the new company was to commercialize the Apple I, a personal computer for hobbyists developed in 1976.

That was an early milestone in the life of a new “industry,” the market for personal computers. Another one happened on March 26, 1976, when the first PC convention, the World Altair Computer Convention, was held at the Airport Marina Hotel near Albuquerque, New Mexico. The MITS Altair 8800 was a build-it-yourself microcomputer kit designed in 1974. It became a hit among hobbyists after it was featured on the cover of the January 1975 issue of Popular Electronics magazine. The keynote speaker at the convention was Bill Gates, then a 20-year-old Harvard University student and co-developer of BASIC for the Altair.

Thirty years after the incorporation of Apple Computer Inc., on January 9, 2007, Steve Jobs introduced the iPhone to the world and announced that the company would from that point on be known as Apple Inc., because computers were no longer its main focus. Apple, which became the most valuable company in the world five years later, no longer fit into any specific “industry” pigeonhole.

When asked if the iPhone represented the convergence of computing and communications, Jobs answered: “I don’t want people to think of this as a computer. I think of it as reinventing the phone.” Apple was entering a new “industry,” and it was important for this consummate salesman to put a stake in the new playground and to re-orient the company to operating as a “consumer electronics” company.

But the labels did not matter, nor did buzzwords such as “convergence.” Digitization was the driving force, transforming into zeros and ones all “computing” and “communications.”

iPhone-type communications started on March 27, 1899, when Guglielmo Marconi received the first wireless signal transmitted across the English Channel, sent from Wimereux, France, to his ship-to-shore station at the South Foreland Lighthouse outside Dover, England. The signal was a test held at the request of the French Government which was considering licensing the invention in France.

Digital computation commenced on March 31, 1939, when IBM signed an agreement with Harvard University to build the Automatic Sequence Controlled Calculator (ASCC), later called Mark I, a general purpose electro-mechanical computer, proposed in 1937 by Harvard’s Howard Aiken. The agreement called for IBM to construct for Harvard “an automatic computing plant comprising machines for automatically carrying out a series of mathematical computations adaptable for the solution of problems in scientific fields.”

The dedication of the Mark I on August 7, 1944, say historians Martin Campbell-Kelly and William Aspray, “captured the imagination of the public to an extraordinary extent and gave headline writers a field day. American Weekly called it ‘Harvard Robot Super-Brain’ and Popular Science Monthly declared ‘Robot Mathematician Knows All the Answers.’”

On March 31, 1951, a ceremony in Philadelphia marked the first sale—to the U.S. Census Bureau—of the UNIVAC I (UNIVersal Automatic Computer I), the first commercial computer developed in the U.S. The fifth UNIVAC unit (built for the U.S. Atomic Energy Commission) was used by CBS to predict the result of the 1952 presidential election. It was the first time two of the major television networks used computers to predict the election results:

“The radio and TV networks hope to end the suspense as quickly as possible on election night …. CBS has arranged to use Univac, an all-electronic automatic computer known familiarly as the ‘Giant Brain.’ Because it is too big (25,000 lbs.) to be moved to Manhattan, CBS will train a TV camera on the machine at Remington Rand’s offices in Philadelphia …. NBC has its own smaller electronic brain … Monrobot …. Says ABC’s News Director John Madigan, professing a disdain for such electronic gimmicks: ‘We’ll report our results through Elmer Davis, John Daly, Walter Winchell, Drew Pearson—and about 20 other human brains.’” –“Univac & Monrobot,” Time Magazine, October 27, 1952.

“When CBS hired a newly minted Univac to analyze the vote in the 1952 presidential election, network officials thought it a nifty publicity stunt. But when the printout appeared, an embarrassed Charles Collingwood reported that the machine couldn’t make up its mind. It was not until after midnight that CBS confessed the truth: Univac had correctly predicted Dwight Eisenhower would swamp Adlai Stevenson in one of the biggest landslides in history, but nobody believed it.” –Philip Elmer-Dewitt, “Television Machines That Think,” Time Magazine, April 6, 1992

Television brought the computer industry to the masses before they could tap into the digital devices on their desktops or in their hands. But long before that, they already could participate in many-to-many communications, in creating and fostering their online communities. On April 1, 1985, Stewart Brand and Larry Brilliant launched The WELL (Whole Earth ‘Lectronic Link), one of the first online communities which had a far-reaching impact on the nascent culture of the Internet.

The invention of the World Wide Web in 1989 put a layer of software over the Internet, the all-digital network of computing and communications. That gave rise to new companies competing with IBM and Apple for dominance of the new all-digital platform catering to both businesses and consumers.

On April 1, 2004, one of these new companies launched Gmail, as an invitation-only beta. The launch was initially met with wide-spread skepticism due to Google’s long-standing tradition of April Fools’ jokes. Google’s press release said: “Google Gets the Message and Launches Gmail. A user complaint about existing email services lead Google to create search-based Webmail. Search is number two online activity and email is number one: ‘Heck, Yeah,’ said Google Founders.” Gmail officially exited beta status on July 7, 2009 at which time it had 170 million users worldwide. As of July 2017, Gmail had 1.2 billion users.

Google has had tremendous success launching applications running on top of the Web, such as search and email, benefiting from an innovative advertising system. But it has failed spectacularly in copying Facebook’s success in moving to the next phase of the industry: extending The WELL into a worldwide bulletin board, connecting people with existing relationships, by common interests, and in ad-hoc communities.

Open software and “the cloud” have created another type of community, a business community, but there, too, Google failed to grasp the market opportunity early, while Amazon, and then Microsoft, succeeded in establishing a formidable presence.

The future will be dominated by the companies that manage to establish the all-digital network as the foundation for their unique, proprietary, addictive, all-inclusive community.

Originally published on Forbes.com

Posted in Community, Computer history, IT history

Twitter as Entertainment

One of this week’s milestones in the history of technology, the launch of Twitter, sheds light on the way we live now—deriving social status and enjoyment from playing games and gaining popularity on the Internet, including creating and spreading fake news.

A dozen years ago this week, the first tweet was posted. Nick Bilton in Hatching Twitter:

On March 21, 2006, at 11:50 A.M., Jack [Dorsey] tweeted, “just setting up my twttr” …  it all started to come together. Jack’s concept of people sharing their status updates; Ev [Williams]’s and Biz [Stone]’s suggestion to make updates flow into a stream, similar to Blogger; Noah [Glass] adding timestamps, coming up with the name, and verbalizing how to humanize status by “connecting” people; and finally, friendships and the idea of sharing with groups that had percolated with Odeo and all the people who had worked there… [Later that evening] they were like a bunch of children at a sleepover wishing each other good night. Like a group of friends talking about what they had done that evening, they all sat separately, together, having a conversation. Tweeting.

In the years that followed, sharing status updates was transformed into a highly visible measure of popularity, with the number of Twitter followers reflecting one’s social status, influence, and celebrity. For many Twitter users, it has become a source of entertainment, not as passive consumers of someone else’s creative output (as when they watched TV or read a newspaper), but as active players, creators, fully engaged producers.

As happened with traditional media, the long-time exclusive domain of reporters and broadcasters, Twitter users seized the opportunity to make a name for themselves not only by reporting their and their communities’ news, but also by creating news, fake or otherwise, and helping spread other people’s news, the more provocative the better. Gaining a few or many Twitter followers as a result, seeing their tweet retweeted many times, and being debated or commented on was the Internet version of being the most popular kid at a sleepover. A tweet or retweet going “viral” was the equivalent of being the high school’s coolest kid for a day.

In “The spread of true and false news online,” just published in Science, MIT’s Soroush Vosoughi, Deb Roy, and Sinan Aral investigated “the differential diffusion of all of the verified true and false news stories distributed on Twitter from 2006 to 2017” by examining about 126,000 stories tweeted by about 3 million people more than 4.5 million times. They found that “falsehood diffused significantly farther, faster, deeper, and more broadly than the truth in all categories of information, and the effects were more pronounced for false political news… Whereas false stories inspired fear, disgust, and surprise in replies, true stories inspired anticipation, sadness, joy, and trust. Contrary to conventional wisdom, robots accelerated the spread of true and false news at the same rate, implying that false news spreads more than the truth because humans, not robots, are more likely to spread it.”
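The cascade measures the study relies on (depth, size, and maximum breadth of a retweet tree) can be illustrated with a toy sketch. The tree and the function below are invented for illustration only; the paper’s actual data pipeline is far more elaborate:

```python
# Toy illustration of retweet-cascade metrics like those in Vosoughi et al.
# (2018): a cascade is a tree of user IDs; depth is the longest retweet
# chain, size is the total number of users in the cascade, and max breadth
# is the widest level of the tree.
from collections import defaultdict, deque

def cascade_metrics(edges):
    """edges: list of (retweeter, source) pairs; the original tweet's
    author is the one node that never appears as a retweeter."""
    children = defaultdict(list)
    retweeters = {r for r, _ in edges}
    sources = {s for _, s in edges}
    root = (sources - retweeters).pop()      # original poster
    for r, s in edges:
        children[s].append(r)

    depth, size = 0, 0
    per_level = defaultdict(int)
    queue = deque([(root, 0)])               # breadth-first traversal
    while queue:
        node, d = queue.popleft()
        size += 1
        depth = max(depth, d)
        per_level[d] += 1
        for c in children[node]:
            queue.append((c, d + 1))
    return {"depth": depth, "size": size, "max_breadth": max(per_level.values())}

# A made-up cascade: A posts; B and C retweet A; D retweets B; E retweets D.
example = [("B", "A"), ("C", "A"), ("D", "B"), ("E", "D")]
print(cascade_metrics(example))  # {'depth': 3, 'size': 5, 'max_breadth': 2}
```

On measures like these, the study found false stories reaching greater depth and breadth, faster, than true ones.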

The researchers thought that false news (they did not want to use the term “fake news” because it has become too politicized) spread faster than true stories because they were “novel,” noting that new information “conveys social status on one that is ‘in the know’ or has access to unique ‘inside’ information.” But they also noted that “the emotions expressed in reply to falsehoods may illuminate additional factors, beyond novelty, that inspire people to share false news.”

Additional factors? How about the sheer enjoyment derived from playing games? How about the satisfaction that is derived from playing games that increase your social status in a measurable way visible to a global audience of millions of Twitter users? How about the comfort derived from feeling less lonely or from making another person afraid or disgusted?

This little bit of enjoyment Twitter brings to some people’s lives is under threat (in the name of “democracy,” no less). The MIT researchers mention “misinformation-containment policies” but do not elaborate. But the authors of another Science article, “The Science of Fake News,” know exactly what’s to be done:

What interventions might be effective at stemming the flow and influence of fake news? We identify two categories of interventions: (i) those aimed at empowering individuals to evaluate the fake news they encounter, and (ii) structural changes aimed at preventing exposure of individuals to fake news in the first instance.

They suggest these measures to stem the flow and influence of fake news after writing, in the same article, that

Evaluations of the medium-to-long–run impact on political behavior of exposure to fake news (for example, whether and how to vote) are essentially nonexistent in the literature. The impact might be small—evidence suggests that efforts by political campaigns to persuade individuals may have limited effects.

The authors of this article no doubt believe—with 48% of the American public, according to a Gallup/Knight Foundation survey—that the news media is primarily responsible for ensuring people have an accurate and politically balanced understanding of the news by virtue of how they report the news and what stories they cover. Which means, if Twitter (and Facebook) are categorized as “news media,” measures to protect the public must be taken.

Another 48% of the public, according to the same survey, does not think it needs protection because it believes in individual responsibility. They think that the primary responsibility lies with individuals themselves by virtue of what news sources they use and how carefully they evaluate the news.

In his book, Nick Bilton traces the evolution of Twitter from asking the question “What are you doing?” to asking “What’s happening?” Twitter has become an important news source, its users often beating established media to breaking news. But Twitter users took it further by answering questions like “what’s your opinion about X?” and “what news can you make up?” It’s just that it wasn’t called “fake news” at the time. Bilton describes a 2009 meeting between Al Gore and two of Twitter’s founders in which Gore tried to convince them to pursue a merger of Twitter with his Current TV.

Bilton: “After the [presidential] debates had concluded, and Barack Obama had tweeted about winning the 2008 presidential election, Gore immediately saw how compelling the combination had been: people making fun of Sarah Palin in real time, debunking false statements by both candidates, rooting for their home team. “

But the right guy was elected president, so there was no need for investigations, academic or otherwise. Twitter was golden, a new and exciting platform for distributing news, a new pillar in the important foundation of democracy provided by the news media.

This elevated status of what was launched as a personal news broadcasting medium was reinforced at the start of the so-called “Arab Spring” in 2011. Twitter was now perceived not only as a pillar of democracy in the United States but as one spreading democracy to the people clamoring for it in the Middle East, helping them topple tyrants. That turned out to be real “fake news.”

There is no reason to take measures to protect the public from fake news on Twitter (or Facebook) because there is no evidence of any causal relationship between misinformation, or being nasty to Sarah Palin or Hillary Clinton, and the state of American democracy. As a matter of fact, there is strong evidence that there is no connection whatsoever.

After paying a visit to the United States, Charles Dickens described (in Martin Chuzzlewit, 1844) the newsboys greeting a ship in New York Harbor: “’Here’s this morning’s New York Stabber! Here’s the New York Family Spy! Here’s the New York Private Listener! … Here’s the full particulars of the patriotic loco-foco movement yesterday, in which the whigs were so chawed up, and the last Alabama gauging case … and all the Political, Commercial and Fashionable News. Here they are!’ … ‘It is in such enlightened means,’ said a voice almost in Martin’s ear, ‘that the bubbling passions of my country find a vent.’”

Somehow, American democracy survived the partisan, sensational, filter bubble-oriented, fake news-filled news media of the 19th century. It will survive Twitter, Facebook, and “objective” news reporters and broadcasters. Unless, of course, a significant majority of Americans comes to believe that independent thinking is so 19th century and that the news media, not they, have the primary responsibility for ensuring they have “an accurate and politically balanced understanding of the news.”

Originally published on Forbes.com

Posted in Twitter | Leave a comment

The Origins of the Blue Origin-SpaceX Race and the Broadcom-Qualcomm Fight


This week’s milestones in the history of technology include communicating through the ether and travelling through space.

On March 16, 1926, Robert Goddard launched the world’s first liquid-fueled rocket. Goddard and his team launched 34 rockets between 1926 and 1941, achieving altitudes as high as 2.6 kilometers (1.6 miles) and speeds as high as 885 km/h (550 mph). Six years earlier, an editorial in The New York Times had scoffed at Goddard’s assertion that it was possible to send a rocket to the Moon, calling into question his understanding of physics:

That Professor Goddard, with his “chair” in Clark College and the countenancing of the Smithsonian Institution, does not know the relation of action and reaction, and of the need to have something better than a vacuum against which to react—to say that would be absurd. Of course he only seems to lack the knowledge ladled out daily in high schools.

Arthur C. Clarke in Profiles of the Future: “Right through the 1930s and 1940s, eminent scientists continued to deride the rocket pioneers—when they bothered to notice them at all… The lesson to be learned … is one that can never be repeated too often and is one that is seldom understood by laymen—who have an almost superstitious awe of mathematics. But mathematics is only a tool, though an immensely powerful one. No equations, however impressive and complex, can arrive at the truth if the initial assumptions are incorrect.”

Forty-nine years after its editorial mocking Goddard, on July 17, 1969—the day after the launch of Apollo 11—The New York Times published a short item under the headline “A Correction.” The three-paragraph statement summarized its 1920 editorial, and concluded:

Further investigation and experimentation have confirmed the findings of Isaac Newton in the 17th Century and it is now definitely established that a rocket can function in a vacuum as well as in an atmosphere. The Times regrets the error.

On March 18, 1909, in Denmark, Einar Dessau used a shortwave transmitter to converse with a government radio post about six miles away in what is believed to have been the first broadcast by a ‘ham’ radio operator.

Susan Douglas in Inventing American Broadcasting on the emergence in the U.S., between 1906 and 1912, of a “grass-roots network of boys and young men,” amateur radio operators:

To the amateurs, the ether was neither the rightful province of the military nor a resource a private firm could appropriate and monopolize. The ether was, instead, an exciting new frontier in which men and boys could congregate, compete, test their mettle, and be privy to a range of new information… This realm… belonged to ‘the people.’ Thinking about the ether this way, and acting on such ideas on a daily basis, was a critical step in the transformation of wireless into radio broadcasting.

From the foreword (by Jack Binns) to The Radio Boys’ First Wireless (1922): “Don’t be discouraged because Edison came before you. There is still plenty of opportunity for you to become a new Edison, and no science offers the possibilities in this respect as does radio communications.”

Today, as Singapore-based Broadcom has launched a $117 billion hostile takeover of US-based Qualcomm, there is “so much scrapping… by companies and countries over a next wave of wireless technology known as 5G.” And Jeff Bezos’ Blue Origin is hiring an Astronaut Experience Manager as it is “inching closer to launching tourists into sub-orbital space,” while Elon Musk’s SpaceX is aiming to colonize Mars, envisioning that “millions of people [will] live and work in space.”

Originally published on Forbes.com

Posted in Predictions, Wireless | Leave a comment

Yesterday’s Futures of Work


Alfred Elmore, The Invention of the Stocking Loom, 1846 (Source: British Museum)

A number of this week’s milestones in the history of technology connect accidental inventors and the impact of their inventions on work and workers.

On March 11, 1811, the first Luddite attack in which knitting frames were actually smashed occurred in the Nottinghamshire village of Arnold. Kevin Binfield in Writings of the Luddites: “The grievances consisted, first, of the use of wide stocking frames to produce large amounts of cheap, shoddy stocking material that was cut and sewn rather than completely fashioned and, second, of the employment of ‘colts,’ workers who had not completed the seven-year apprenticeship required by law.”

Back in 1589, William Lee, an English clergyman, invented the first stocking frame knitting machine, which, after many improvements by other inventors, drove the spread of automated lace making at the end of the 18th century. Legend has it that Lee had invented his machine in order to get revenge on a lover who had preferred to concentrate on her knitting rather than attend to him (as depicted by Alfred Elmore in the 1846 painting The Invention of the Stocking Loom).

Lee demonstrated his machine to Queen Elizabeth I, hoping to obtain a patent, but she refused, fearing the impact on the work of English artisans: “Thou aimest high, Master Lee. Consider thou what the invention could do to my poor subjects. It would assuredly bring to them ruin by depriving them of employment, thus making them beggars” (quoted in Why Nations Fail by Daron Acemoglu and James Robinson).

Another accidental inventor was Alexander Graham Bell. His father, grandfather, and brother had all been associated with work on elocution and speech, and both his mother and his wife were deaf, influencing Bell’s research interests and inventions throughout his life. Bell’s research on hearing and speech led him to experiment with the transmission of sound by means of electricity, culminating on March 7, 1876, when he received a US patent for his invention of what would later be called the telephone. Three days later, on March 10, 1876, Bell said into his device: “Mr. Watson, come here, I want you.” Thomas Watson, his assistant, sitting in an adjacent room at 5 Exeter Place, Boston, answered: “Mr. Bell, do you understand what I say?”

Later that day, Bell wrote to his father (as Edwin S. Grosvenor and Morgan Wesson recount in Alexander Graham Bell): “Articulate speech was transmitted intelligibly this afternoon. I have constructed a new apparatus operated by the human voice. It is not of course complete yet—but some sentences were understood this afternoon… I feel I have at last struck the solution of a great problem—and the day is coming when telegraph wires will be laid to houses just like water or gas—and friends converse with each other without leaving home.”

The telephone was adopted enthusiastically in the US, but there were doubters elsewhere who questioned its potential to re-engineer how businesses communicated. In 1879, William Henry Preece, inventor and consulting engineer for the British Post Office, could not see the phone succeeding in Britain because he thought the new technology could not compete with cheap labor: “…there are conditions in America which necessitate the use of instruments of this kind more than here. Here we have a superabundance of messengers, errand boys, and things of that kind.”

The telephone not only ended the careers of numerous messenger boys around the world but also led to the total demise of the telegraph operator. On January 25, 1915, Bell inaugurated the first transcontinental telephone service in the United States with a phone call from New York City to Thomas Watson in San Francisco. Bell repeated the words of his first-ever telephone call on March 10, 1876. In 1915, Watson replied, “It would take me a week to get to you this time.”

The telephone, while destroying some jobs, created new occupations such as the telephone operator. But this job, very popular among young girls, eventually became the victim of yet another accidental inventor.

On March 10, 1891, Almon Brown Strowger, an American undertaker, was issued a patent for his electromechanical switch to automate telephone exchanges. Steven Lubar in InfoCulture: “…a Kansas City undertaker, Strowger had a good practical reason for inventing the automatic switchboard. Legend has it that his telephone operator was the wife of a business rival, and he was sure that she was diverting business from him to her husband. And so he devised what he called a ‘girl-less, cuss-less’ telephone exchange.”

The first automatic switchboard was installed in La Porte, Indiana, in 1892, but automatic switchboards did not become widespread until the 1930s. Anticipating future reactions to some of the inventions of the computer age, users did not receive the shifting of work onto them enthusiastically. But AT&T’s top-notch propaganda machine got over that inconvenience by predicting that before long, more operators would be needed than there were young girls suitable for the job.

But both AT&T and its users were ambivalent about switching to automatic switching. While users were not happy about working for no pay for the phone company, they also valued the privacy accorded to them by the automatic switchboard. And AT&T was interested in preserving its vast investment in operator-assisted switching equipment. Richard John in Network Nation:

To rebut the presumption that Bell operating companies were wedded to obsolete technology, Bell publicists lauded the female telephone operator as a faithful servant… The telephone operator was the “most economical servant”–the only flesh-and-blood servant many telephone users could afford…. The idealization of the female telephone operator had a special allure for union organizers intent on protecting telephone operators from technological obsolescence. Electromechanical switching … testified a labor organizer in 1940… was “inanimate,” “unresponsive,” and “stupid,” and did “none of the things which machinery is supposed to do in industry”–making it a “perfect example of a wasteful, expensive, inefficient, clumsy, anti-social device.”

The transistor, invented in 1947 at AT&T to improve switching, led to the rise and spread of computerization, eventually making the switching system essentially a computer. By 1982, almost half of all calls were switched electronically. The transistor also took computerization out of the confines of deep-pocketed corporations and put it in the hands of hobbyists.

On March 5, 1975, The Homebrew Computer Club met for the first time, with 32 “enthusiastic people” attending. Apple’s co-founder Steve Wozniak:

Without computer clubs there would probably be no Apple computers. Our club in the Silicon Valley, the Homebrew Computer Club, was among the first of its kind. It was in early 1975, and a lot of tech-type people would gather and trade integrated circuits back and forth. You could have called it Chips and Dips…


The Apple I and II were designed strictly on a hobby, for-fun basis, not to be a product for a company. They were meant to bring down to the club and put on the table during the random access period and demonstrate: Look at this, it uses very few chips. It’s got a video screen. You can type stuff on it. Personal computer keyboards and video screens were not well established then. There was a lot of showing off to other members of the club. Schematics of the Apple I were passed around freely, and I’d even go over to people’s houses and help them build their own.


The Apple I and Apple II computers were shown off every two weeks at the club meeting. “Here’s the latest little feature,” we’d say. We’d get some positive feedback going and turn people on. It’s very motivating for a creator to be able to show what’s being created as it goes on. It’s unusual for one of the most successful products of all time, like the Apple II, to be demonstrated throughout its development.

Apple and other PC makers went on to make a huge impact on workers and how work gets done (and later, on how consumers live their lives). It was difficult for “experts,” however, to predict exactly which workers would be affected and how.

Yesterday’s futures reveal a lot about what did not happen and why it didn’t. I have in my files a great example of the genre, a report published in 1976 by the Long Range Planning Service of the Stanford Research Institute (SRI), titled “Office of the Future.”

The author of the report (working not far from where the Homebrew Computer Club was meeting) was a Senior Industrial Economist at SRI’s Electronics Industries Research Group, and a “recognized authority on the subject of business automation.” His bio blurb indicates that he “also worked closely with two of the Institute’s engineering laboratories in developing his thinking for this study. The Augmentation Research Center has been putting the office of the future to practical test for almost ten years… Several Information Science Laboratory personnel have been working with state-of-the-art equipment and systems that are the forerunners of tomorrow’s products. The author was able to tap this expertise to gain a balanced picture of the problems and opportunities facing office automation.”

And what was the result of all this research and analysis? The manager of 1985, the report predicted, will not have a personal secretary. Instead he (decidedly not she) will be assisted, along with other managers, by a centralized pool of assistants (decidedly and exclusively, according to the report, of the female persuasion). He will contact the “administrative support center” whenever he needs to dictate a memo to a “word processing specialist,” find a document (helped by an “information storage/retrieval specialist”), or rely on an “administrative support specialist” to help him make decisions.

Of particular interest is the report’s discussion of the sociological factors driving the transition to the “office of the future.” Forecasters often leave out of their analysis the annoying and uncooperative (with their forecast) motivations and aspirations of the humans involved. But this report does consider sociological factors, in addition to organizational, economic, and technological trends. And it’s worth quoting at length what it says on the subject:

“The major sociological factor contributing to change in the business office is ‘women’s liberation.’ Working women are demanding and receiving increased responsibility, fulfillment, and opportunities for advancement. The secretarial position as it exists today is under fire because it usually lacks responsibility and advancement potential. The normal (and intellectually unchallenging) requirements of taking dictation, typing, filing, photocopying, and telephone handling leave little time for the secretary to take on new and more demanding tasks. The responsibility level of many secretaries remains fixed throughout their working careers. These factors can negatively affect the secretary’s motivation and hence productivity. In the automated office of the future, repetitious and dull work is expected to be handled by personnel with minimal education and training. Secretaries will, in effect, become administrative specialists, relieving the manager they support of a considerable volume of work.”

Regardless of the women’s liberation movement of his day, the author could not see beyond the creation of a two-tier system in which some women would continue to perform dull and unchallenging tasks, while other women would be “liberated” into a fulfilling new job category of “administrative support specialist.” In this 1976 forecast, there are no women managers.

But this is not the only sociological factor the report missed. The most interesting sociological revolution of the office in the 1980s – and one missing from most (all?) accounts of the PC revolution – is what managers (male and female) did with their new word processing, communicating, calculating machine. They took over some of the “dull” secretarial tasks that no self-respecting manager would deign to perform before the 1980s.

This was the real revolution: The typing of memos (later emails), the filing of documents, the recording, tabulating, and calculating. In short, a large part of the management of office information, previously exclusively in the hands of secretaries, became in the 1980s (and progressively more so in the 1990s and beyond) an integral part of managerial work.

This was very difficult, maybe impossible, to predict. It was a question of status. No manager would type before the 1980s because it was perceived as work that was not commensurate with his status. Many managers started to type in the 1980s because now they could do it with a new “cool” tool, the PC, which conferred on them the leading-edge, high-status image of this new technology. What mattered was that you were important enough to have one of these cool things, not that you performed with it tasks that were considered beneath you just a few years before.

What was easier to predict was the advent of the PC itself. And the SRI report missed this one, too, even though it was aware of the technological trajectory: “Computer technology that in 1955 cost $1 million, was only marginally reliable, and filled a room, is now available for under $25,000 and the size of a desk. By 1985, the same computer capability will cost less than $1000 and fit into a briefcase.”
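The cost figures the SRI report cites imply a steep, steady rate of decline. As a rough illustration (the years and dollar amounts come from the quote above; the compound-rate calculation is mine, not the report’s), the implied annual decline can be sketched in a few lines of Python:

```python
def annual_decline(cost_start, cost_end, years):
    """Compound annual rate of cost decline, as a fraction (0.16 == 16%/year)."""
    return 1 - (cost_end / cost_start) ** (1 / years)

# $1,000,000 in 1955 -> $25,000 in 1976 (observed at the time of the report)
r1 = annual_decline(1_000_000, 25_000, 1976 - 1955)   # ~16% per year
# $25,000 in 1976 -> $1,000 by 1985 (the report's projection)
r2 = annual_decline(25_000, 1_000, 1985 - 1976)       # ~30% per year

print(f"1955-1976: {r1:.0%}/yr   1976-1985 (projected): {r2:.0%}/yr")
```

The report, in other words, was not only aware of exponential cost decline but projected it to accelerate — which makes its failure to foresee what that cheap computer would become all the more striking.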

But the author of the SRI report could only see a continuation of the centralized computing of his day. The report’s 1985 fictional manager views documents on his “video display terminal” and the centralized (and specialized) word processing system of 1976 continues to rule the office ten years later.

This was a failure to predict how the computer that would “fit into a briefcase” would become personal, i.e., would take the place of the “video display terminal” and then go beyond it as a personal information management tool. And the report also failed to predict the ensuing organizational development in which distributed computing replaced, or was added to, centralized computing.

So regard the current projections of how many jobs will be destroyed by artificial intelligence with healthy skepticism. No doubt, as in the past, many occupations will be affected by increased computerization and automation. But many current occupations will thrive and new ones will be created, as the way work is done, and the way we live our lives, continues to change.

Originally published on Forbes.com

Posted in Automation, Innovation | Leave a comment

Debating the Impact of AI on Jobs


One of this week’s milestones in the history of technology sets the tone for two centuries of debating the impact of machines and artificial intelligence on human work and welfare.

On February 27, 1812, Lord Byron gave his first address as a member of the House of Lords in a parliamentary debate on the Frame Breaking Act which made destroying or damaging lace-machines (stocking frames) a crime punishable by death.

Criticizing the proposed legislation advanced by the government in response to textile workers protesting the industrialization and mechanization of their craft, Byron told his peers:

The rejected workmen, in the blindness of their ignorance, instead of rejoicing at these improvements in arts so beneficial to mankind, conceived themselves to be sacrificed to improvements in mechanism. In the foolishness of their hearts, they imagined that the maintenance and well doing of the industrious poor, were objects of greater consequence than the enrichment of a few individuals by any improvement in the implements of trade which threw the workmen out of employment, and rendered the labourer unworthy of his hire.

And, it must be confessed, that although the adoption of the enlarged machinery, in that state of our commerce which the country once boasted, might have been beneficial to the master without being detrimental to the servant; yet, in the present situation of our manufactures, rotting in warehouses without a prospect of exportation, with the demand for work and workmen equally diminished, frames of this construction tend materially to aggravate the distresses and discontents of the disappointed sufferers.

The first two cantos of Byron’s Childe Harold’s Pilgrimage came out on March 3, the same week as his maiden speech. He became an overnight superstar, says Fiona MacCarthy, his recent biographer, “the first European cultural celebrity of the modern age.”

Later, in canto 3, Byron wrote about “The child of love,–though born in bitterness, and nurtured in convulsion.” His daughter, born just before her parents separated, was Ada Augusta, later Lady Lovelace, promoter and explicator of Charles Babbage’s Analytical Engine. Unlike Babbage, she saw its potential as a symbol manipulator rather than a mere calculator and the possibilities for a much broader range of automation, beyond speeding up calculation.

Lovelace made clear, however, that what came to be known as “artificial intelligence” depends on human intelligence and does not replace it. Rather, it augments it, sometimes throwing new light on an established body of knowledge but never developing it from scratch: “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.”

More generally, Lovelace argued that “[i]n considering any new subject, there is frequently a tendency, first, to overrate what we find to be already interesting or remarkable; and, secondly, by a sort of natural reaction, to undervalue the true state of the case, when we do discover that our notions have surpassed those that were really tenable.” In this she anticipated the oft-repeated Silicon Valley maxim (known as Amara’s Law): “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

In 1956, sociologist Daniel Bell wrote in Work and its Discontents about this typical reaction to a new technology in the context of the emerging computerization of work: “Americans, with their tendency to exaggerate new innovations, have conjured up wild fears about changes that automation may bring.” Bell thought that the predictions of factories operating without humans were “silly,” but anticipated that “many workers, particularly older ones, may find it difficult ever again to find suitable jobs. It is also likely that small geographical pockets of the United States may find themselves becoming ‘depressed areas’ as old industries fade or are moved away.”

As computerization, or the application of “artificial intelligence” to an increasing number of occupations, has advanced, many commentators and government officials have become increasingly alarmed. In 1964, President Johnson established the National Commission on Technology, Automation, and Economic Progress, saying:

Technology is creating both new opportunities and new obligations for us: opportunity for greater productivity and progress; obligation to be sure that no workingman, no family must pay an unjust price for progress.

Automation is not our enemy. Our enemies are ignorance, indifference, and inertia. Automation can be the ally of our prosperity if we will just look ahead, if we will understand what is to come, and if we will set our course wisely after proper planning for the future.

A year later, economist Robert Heilbroner wrote that “as machines continue to invade society, duplicating greater and greater numbers of social tasks, it is human labor itself — at least, as we now think of ‘labor’ — that is gradually rendered redundant.”

In 2003, Kurt Vonnegut wrote:

Do you know what a Luddite is? A person who hates newfangled contraptions… Today we have contraptions like nuclear submarines armed with Poseidon missiles that have H-bombs in their warheads. And we have contraptions like computers that cheat you out of becoming. Bill Gates says, “Wait till you can see what your computer can become.” But it’s you who should be doing the becoming, not the damn fool computer. What you can become is the miracle you were born to be through the work that you do.

In Race Against the Machine: How the Digital Revolution Is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy (2011), Erik Brynjolfsson and Andrew McAfee launched the current phase of the perennial debates over machines and jobs, and the automation and augmentation of human intelligence. They wrote: “In medicine, law, finance, retailing, manufacturing and even scientific discovery, the key to winning the race is not to compete against machines but to compete with machines.”

In 2018, economist Hal Varian argued that predictions based on demographics are the only valid ones:

There’s only one social science that can predict a decade or two in the future, and that’s demography. So we don’t really know where technology will be in 10 years or 20 years, but we have a good idea of how many 25- to 55-year-old people there’ll be in 10 years or 25 years… Baby Boomers are not going to be sources of growth in the labor force anymore because Baby Boomers are retiring…

…there will be fewer workers to support a larger group of nonworking people. And if we don’t increase productivity—which really means automation—then we’re going to be in big trouble in those developed countries…

So the automation, in my view, is coming along just in time—just in time to address this coming period of labor shortages…

The line in Silicon Valley is, “We always overestimate the amount of change that can occur in a year and we underestimate what can occur in a decade”… I think that’s a very good principle to keep in mind.

Originally published on Forbes.com

Posted in Artificial Intelligence, Race Against the Machine | Leave a comment