Sunday, March 19, 2017

Learn programming by gamifying? How about by reading?

On impulse I spent a couple of dollars on Amazon Marketplace to buy the out-of-print book Micro Adventure No. 1: Space Attack.  It's a "second person thinker" adventure novella: like old-school interactive fiction (i.e. text adventures),  it's written in the second person, as in "Although you'd like to rest for a few minutes, Captain Garrety insists that you get to your feet…"

In this short story aimed at pre-teens—the first in a series of at least 10, dating from the early 1980s—you must defend a space station from alien attack. But the interesting bit is that eight BASIC programs are embedded into the text of the story, as the page scan below shows.

The initial program just has to be typed in and run in order to reveal the "secret message" that will describe your mission to you. But as the book progresses, the programs require you to debug, analyze, or otherwise modify them as part of the story line. Some programs have bugs you must fix; in other cases you're asked to write a short program that automates a simple task, such as showing mappings between text characters and their ASCII codes (this is pre-Unicode, remember), in order to help "decode" intercepted enemy messages.
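
To give a flavor of the puzzles, here is a minimal sketch (in modern Python rather than the book's period BASIC, and with a made-up message purely for illustration) of the kind of character-to-ASCII "decoder" helper the story asks you to write:

    # Show each character of an intercepted message next to its ASCII code,
    # and turn a list of ASCII codes back into readable text.

    def show_codes(message):
        for ch in message:
            print(ch, "=", ord(ch))

    def decode(codes):
        return "".join(chr(c) for c in codes)

    show_codes("XQRT ZEBRA")         # e.g., X = 88, Q = 81, ...
    print(decode([72, 69, 76, 80]))  # prints HELP

A period BASIC version would use ASC and CHR$ in place of Python's ord and chr, but the idea is the same.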


Of course, failing to do the puzzles can't block your progress in the story, because nothing stops you from just turning the page to keep reading. But this strikes me as an interesting way to get kids to learn how simple programs work. (I don't know how effective it was.) There is a "reference manual" at the end of the book explaining how the programs work, giving hints on solving the puzzles, and, of course, indicating which modifications must be made to allow the programs to run on different microcomputers. (Whereas code in a modern scripting language like Python behaves the same on all platforms, BASIC "dialects" differed enough from one machine to another that almost any non-toy program required changes to run under a different computer's BASIC interpreter.)

An entire generation of programmers was first introduced to computing via the BASIC language. I've been looking for an example of an old geometry or physics textbook containing "Try it in BASIC" examples (we didn't use any of those at my middle school), but this seems a lot more fun. While I'm pretty convinced today's kids don't read books anymore, perhaps this approach could be adapted into an interactive format in which you play an adventure game but must solve programming-related puzzles to make actual progress in the game.

Monday, March 13, 2017

Book summary: America in the Seventies

Beth Bailey and David R. Farber. America in the Seventies (Culture America series). University Press of Kansas, 2004.

The premise of this book, as with similar books about the American 70s by other writers, could be summed up as "the 70s is when the 60s were implemented." While the seeds of civil rights, gender equality, labor solidarity, etc. may have been sown in the 60s, the actual policies that put these ideas into practice happened during the 70s. At the same time, the US confronted a series of setbacks: Vietnam was not only a military embarrassment with enormous human costs, but a war that polarized the nation on moral grounds, with none of the moral clarity or national purpose of WW II; expanded government programs and higher-paid labor to meet the social demands of the 60s, combined with the replacement of American heavy industry with imported goods and the movement of labor-intensive production overseas, resulted in "stagflation" (inflation combined with economic stagnation); the Arab oil shocks painfully emphasized America's utter dependence on the whim of a small group of nations whose culture in some ways could not be further from our own. Richard Nixon's Watergate scandal convinced an increasingly cynical public that the government was not only incapable of resolving these economic woes but also lacked integrity and was not invested in the well-being of the middle class. Social structures were challenged by movements involving gender roles, racial identity, and sexual identity, destabilizing social norms that were perceived to have anchored the country for decades and leaving many people casting about for personal identity and purpose, as well as for confidence in their country. This toxic combination led to nationwide anomie and alienation, as expressed in gritty (and now-iconic) 70s movies like Taxi Driver, Looking for Mr. Goodbar, Midnight Cowboy, and Saturday Night Fever.

One very significant result of this existential crisis was the emergence of the New Right with the Reagan election of 1980. By latching on to the common denominators of dissatisfaction with government incompetence and corruption, and of the alienation bred by changing social roles, the New Right assembled a constituency of anti-tax activists, critics of "big government", and the religious right. Reagan and his successors used this mandate to gut the government altogether, following an existing conservative agenda that just needed dusting off after losing its social luster during the 60s.

The book is a collection of well-chosen independent essays, each treating one of these social or economic upheavals in detail. As an academic myself, I approached it with some trepidation since academic writing can be ponderous and needlessly self-indulgent, but these essays are vigorously written and eminently readable by a non-expert like me. I commend the editors on their choices, though I would have enjoyed some connective material to introduce each essay or place it in the context of the larger themes, as is common in "edited by" collections. Nonetheless, this is a highly readable and informative account of how the "me decade" of the 70s, in trying to implement the social reforms of the 60s, ironically enabled the rise of the New Right and "greed is good" in the 80s.

Book summary: The Next America

Paul Taylor. The Next America: Boomers, Millennials, and the Looming Generational Showdown. PublicAffairs, 2016.

To paraphrase a famous scientist, the nice thing about data is that it doesn't matter whether you believe it or not. This book contains a tremendous amount of (summarized) data about the current and future demographics of the United States, gathered from both public sources (e.g., statistics published by the Bureau of the Census, the IRS, and other Federal agencies) and from one of the world's best-known nonpartisan survey-based research foundations (Pew).

I'd summarize the biggest takeaways as follows:

Generational attitude shift. The combination of immigration, intermarriage, and changing social mores among younger generations (the author identifies today's primary generational groups from oldest to youngest as Silent, Boomers, GenX, and Millennials) means that the social attitudes of current and future voters lean overwhelmingly towards what most people would associate with "progressive values" or with the Democratic Party. In particular, as the Republican Party has tacked farther and farther to the right, the segment of the electorate receptive to its message is shrinking and in fact dying off. On the other hand, these younger-but-growing segments of the electorate have a much poorer voter-turnout record than their older and more conservative counterparts. This combination of elements has profound consequences for future elections.

Socioeconomic consequences of an aging population. The biggest coming "showdown" (to which the subtitle alludes) is the aging of the world's population. Japan, China, and some European nations will get there ahead of the US, in part because although birth rates are falling everywhere throughout the developed world, in the US that effect is partially offset by immigration, especially economically (since most immigrants arrive ready to work rather than as newborns). But all these countries are rapidly approaching a point where fewer and fewer working people are supporting more and more seniors. (In Japan the ratio will approach 1:1 by about 2040 if current trends continue.) There is an unfortunate positive feedback loop in countries like the US where most legislation is made democratically: the older generations constitute a large and growing voter bloc to whom politicians must cater, and that bloc has been using its influence to appropriate a growing share of government wealth redistribution. In the US, Social Security and Medicare are basically on the ropes. At some level most of us know this, but the statistics and trends presented to quantify the situation are stark.

In other words: not only will the older and younger generations find themselves at odds economically on how to redistribute wealth, but their positions will be even farther apart because their social contexts are so different. As the author states in the introduction, "either transformation by itself would be the dominant demographic story of its era."

The book does a nice job of including enough charts and graphs inline to illustrate or back up a point, while relegating vastly more charts and tables to an appendix you can browse at leisure or consult for more detail.

There is also a fascinating and well-written appendix describing in high-level terms the survey methodologies used by Pew and other professional research organizations, for those who think surveys are just a matter of asking some questions and tabulating answers. The appendix covers random sampling; a layperson's explanation of sampling error and reweighting; various biases including recency, confirmation, and self-selection; running meta-surveys to test the effect of different phrasings or presentations of the same questions; and much more. Indeed, this appendix is useful reading for anyone involved in doing rigorous surveys, whether they are interested in the rest of the book's content or not.


Whether it cheers you up, depresses you, or just causes you to raise an eyebrow may depend on where you fall on the political spectrum, but wherever that is, this is essential and well-reported information.

Book summary: The Wealth of Humans

Ryan Avent. The Wealth of Humans: Work, Power, and Status in the Twenty-first Century. St. Martin’s Press, 2016.

What follows is my summary of the book's main argument. There are a number of useful reviews on Amazon, including some written by very informed people who disagree with key points of the author's argument. The main objection seems to be that the author overstates the extent to which income inequality is an inevitable by-product of technological change (section 1 of my summary below), and understates the extent to which it is affected by politics and institutional decisions, e.g. infrastructure spending programs that can locally increase labor demand, or social conventions to boost wages.

Executive summary

In most economically free societies, the two mechanisms of wealth-sharing are work (employers shift wealth to employees by paying them) and redistribution (taxes pay for goods and services that are not necessarily distributed back in proportion to what each person paid), and the society has a definition of who is "in" (eligible to participate in both mechanisms). This book asks: What happens to these mechanisms when increasing automation is squeezing the first, and those controlling the wealth are opposed to expanding the reach of the second?

Its overall responses are: (1) while it's true that policy everywhere has tipped to favor wealth concentration, the essential problem is structural; (2) as a result of this fundamental structural problem, most efforts to "create jobs" will run into problems that ultimately doom them; (3) therefore, for better or worse, some form of non-labor-based redistribution will become necessary (e.g., universal basic income).

1. Productivity-enhancing technology thwarts a balanced labor market 

Henry Ford's innovation was to de-skill individual roles to vastly decrease the cost and increase the per-employee productivity of making cars; precisely because the de-skilled jobs were tedious, he raised wages and coddled his employees to attract labor and reduce turnover, something he could afford to do because of their high productivity. But this scenario comes with three problems.

First, the high productivity makes it affordable to pay higher wages, but workers in low-productivity industries such as education and healthcare that suffer from Baumol's "cost disease" (it costs about the same to educate 1 student or care for 1 patient as it ever has) are in the same labor market, so their wages must rise *despite* stagnant productivity, thereby increasing the cost to the consumer of purchasing those goods or services. That is, wealthy companies can afford to pay employees more because of the employees' much higher productivity, so that most income inequality is due to wage gaps *between* firms/sectors rather than within them.

Second, productivity-enhancing de-skilling paves the way for complete automation of those jobs, so the benefit to low-skill workers is short-lived.

Third, since higher productivity leads to a labor glut even before automation takes over, it pushes wages down. This is bad because while the effective price of some goods also falls due to that productivity (cars, cell phones), the effective price of others doesn't, either because supply is scarce (housing) or because they suffer from Baumol's cost disease of stagnant productivity (education, healthcare).

This is an example of how "job creation" systems can end up working against themselves. Future employment opportunities will likely satisfy at most 2 of the following 3 conditions ("employment trilemma"): (1) high productivity and wages, (2) resistant to automation, (3) potential to absorb large amounts of labor. To see the dynamic, consider the solar-panel industry. Increased productivity in manufacturing solar panels has caused them to drop in cost, creating a large market for solar panel installers, a job resistant to automation (meets criteria 2 and 3). But that same increased productivity means most of the cost of acquiring solar is the installation labor, limiting wage growth for installers (fails criterion 1). As another example, consider healthcare. As technology increases the productivity of (or automates) other aspects of care delivery, healthcare jobs will concentrate in non-automatable services requiring few skills besides bedside manner and the willingness to do basic and often unpleasant caregiver tasks (again meeting criteria 2 and 3 but failing 1). As a third, consider artisanally-produced goods, whose low productivity is part of their appeal (meets 1 and 2). But the market for them is limited to the small subset of people who can afford to buy them (fails 3).

Can education help? Higher educational attainment is still key to high wages, but not to high wage *growth*. The level of education required for that has been climbing higher and higher, putting it beyond the economic (and possibly intellectual) reach of most people, yet those are precisely the credentials needed to participate in the most lucrative parts of the economy. The displaced workers "trickle down" the skill-level chain and depress wages further down the wage hierarchy. So improving education, while a good idea, won't help people in poor countries as much as simply moving them into a rich country to work in that economy.

2. Hence, social capital is increasingly key to successful companies…

Since WW2, developed-nation economies have increasingly "dematerialized" to the point where most of the value in goods being produced is in knowledge-worker contributions, rather than physical manufacturing or the labor therein. (iPhones and cars are built overseas, yet most of their value is in design and software, which aren't outsourced.) Increasingly, the "wealth" of a company is not in its capitalization or even the material output of its employees, but in its "culture" -- its way of absorbing, refactoring, and acting on information in a value-added way that is difficult to replicate and produces a product customers want to buy.

(This is also why cities are resurgent -- they permit a dense social/living fabric that promotes evolution of social capital, and the larger/denser the city, the more productive it becomes because of this effect, supporting high levels of specialization and social networks that facilitate labor mobility. The demand results in high housing costs, but NIMBYs oppose building more housing because even though the benefits would be spread over the whole city, the costs would be concentrated in their neighborhood.)

By definition, culture is a group phenomenon, not a set of rules handed down by a boss. Social capital cannot be exported like material goods; all you can do is try to create (or impose) conditions under which it can develop by allowing the free flow of ideas and labor (i.e., the people in whose heads social capital lives), as the EU is trying to do within Europe. This is troubling for developing economies whose societies lack social capital.

Hence, China, having spent a fortune to create physical infrastructure to improve worker productivity, has reached diminishing returns: further productivity improvements must now come from "deepening" the workers' social capital, which has been wrecked by decades of cultural mismanagement by a totalitarian regime. Similarly, India's outsourcing boom and China's hyper-rapid industrialization occurred because technology allowed them to temporarily bypass the difficult step of building social capital, by "biting off" chunks of activity taking place in richer economies: India hosting outsourced call centers, or China jumping into a global supply chain established by rich economies and uniquely facilitated by the digital economy, in both cases offering labor at lower cost. But this era is ending: other countries can play the same trick (e.g., Indonesia as the new China, depressing Chinese wages), automation is coming, and the relative advantage to outsourcing decreases as products become more information-centric. (Though note that while "reshoring" is happening, it's not creating more jobs: Tesla would rather pay a few highly skilled engineers to oversee an automated assembly plant than pay lots of low-skilled factory workers to build something manually and less reliably elsewhere.)

It used to be thought that poor countries were poor because they lacked financial capital, but it's now clear that building factories does not by itself produce social capital (India, China). Indeed, highly-educated workers in poor countries become more productive when they move to rich countries, suggesting it's the home country's social capital that is lacking.

3. …yet the benefits of social capital don’t accrue to those who create and embody it

Yet as important as social capital is, when a worker leaves a company, his knowledge of that company's "culture" is generally not useful at a new firm, so he has little leverage (though this is somewhat counterbalanced by the pressure to not have *most* workers quit, which would destroy the culture). Conversely, a chief executive is harder and costlier to replace, so has more leverage as an individual. Herein lies the problem: "social capital" is in the collective heads of individual workers, but its benefits flow disproportionately to the owners of financial capital. A corollary is that the efficiency gains achieved by fluid (i.e., non-unionized) labor markets haven't been redistributed to the workers whose bargaining power was sacrificed to achieve those efficiencies. Marx predicted that this dynamic was unsustainable and that society would collapse: either the workers would revolt and upend the government and the social norms it curates, destroying the wealth for everyone, or the wealth-owners would asymptotically reach a point where no further wealth could be generated and harvested, at which point they'd start fighting each other over the fixed amount of wealth, again destroying the society. Piketty notes that the two world wars did a lot to disrupt this downward slide because wars, taxation, inflation, and depression destroyed many of the superconcentrated fortunes made in the industrial age, but as noted above, the change was temporary.

The consequence of this structural problem is that some form of non-labor-based redistribution is likely to be the only nonviolent way forward. This path has at least two challenges. One is that the act of doing work has other benefits -- agency, dignity, reinforcement of socially-useful values -- that would be lost; although surveys show that people saddled with extra free time due to weak job markets tend to spend it sleeping or watching TV, i.e., at leisure. A second challenge is that such "highly redistributive" societies tend to emerge in ethnically/nationally coherent political units, and motivate the society to draw a tight boundary around itself. For example, Scandinavian countries have generous welfare states that make them desirable to immigrate into, but as a result the load that heavy immigration puts on the welfare system is tearing at the seams of their welfare economies. That is, we can't expect rich liberal countries to throw open their borders heedlessly when the potential pool of immigrants dwarfs those working to generate the wealth that is redistributed.

Saturday, March 11, 2017

Book summary: From Betamax to Blockbuster



Joshua M. Greenberg, From Betamax to Blockbuster: Video Stores and the Invention of Movies on Video. Cambridge, MA: MIT Press, 2016.

Summary: Although the VCR was originally positioned as a device for time-shifting TV, its dominant use quickly became the viewing of pre-recorded content. The book tells the story of that evolution, and how it affected both the medium and the content: how the mismatches between VCR/TV technology and theater technology affected movie viewing, the social and commercial constructs such as video rental stores that sprang up around the experience, and the cultural shift in the perception of what, exactly, a "movie" was and what the experience of "watching a movie" came to mean. Video rental stores, which provided the intermediary that brought these mismatched perspectives together, did such a good job that they ultimately rendered themselves obsolete.

Technological prehistory. In 1969 Sony invented the U-Matic, the first cassette-based color videotape recorder and ancestor of the Betamax, which could record up to an hour of video in the NTSC (American analog TV) format. Up to then, reel-to-reels with low-density tape had been used for "kinescoping" a TV broadcast: a show would be shot on the East Coast, a kinescope pointed at a monitor to record the playback, and then the film would be developed and rebroadcast around the country. Selling the U-matic was hard since there was no "software"; initial attempts focused on getting educational companies to convert their materials to the format for in-school use; in practice, adult video arcades probably did more to launch the industry, replacing "film loops" with cassettes.

Sony positioned the 1975 Betamax (price: $1,295) as a device for "time-shifting TV", hence underestimated consumer demand for blank cassettes. In addition, Betamax tapes could only record 1 hour of video. For the first 2 years of Betamax's existence, the only prerecorded tapes users could legally buy were public domain films or pornography. Japanese competitor JVC (the Victor Company of Japan) came up with its own incompatible format called VHS, which could record two full hours albeit with slightly lower quality than Betamax. JVC also triggered a price war by licensing the rights to manufacture VHS equipment to any manufacturer, whereas Sony was the exclusive manufacturer of Betamax equipment. One VHS manufacturer, Matsushita (Panasonic), struck a deal with RCA to manufacture a unit that would allow 4 hours of recording on VHS tape at substantially lower quality, allowing sports events to be captured in their entirety. Sony (and most experts) insisted that Betamax's recording quality was superior, but that seemed less important to consumers than longer recording time and lower-priced equipment. Sony eventually responded to these technical and business challenges with improvements to Betamax, but by then VHS had basically won the format war with consumers.

Late 1970s: early adopters lead to the birth of a consumer-facing business. Early videophiles (usually white males aged 21-39) would record and archive entire TV miniseries (or better, movies) and even edit out commercials to make the experience closer to viewing a movie. They would copy and trade tapes, by mail or in person at informal gatherings; they formed nationwide networks supported by amateur magazines, phone numbers, and mailing lists used to distribute photocopies of TV Guide listings from other regions.

A pilot test of a third format called Cartrivision, which could hold 2 hours of video and was used to distribute "classic" movies, failed due to poor implementation: technical problems made the tapes disintegrate prematurely and damage the players; the tapes could not be rewound except by a dealer, to ensure that renting only allowed a single viewing, which angered users (a necessary concession to movie studios, who refused to license movies unless they could closely control the viewing experience); and the tapes were delivered by mail, taking days to arrive. Indeed, when Sony released the Betamax in 1975, chairman Akio Morita had tried to strike a deal with Paramount to distribute movies, but again failed because the studio feared losing control of the user's viewing experience. In essence, attempts to create a movie-distribution market were hobbled by tying the studio-imposed constraints of distribution to the technology used. VHS sidestepped most of that and became the dominant format, so it was effectively poised to become the vehicle for distributing movies to consumers; but the studios were still resistant, seeing it as a threat that would cannibalize their existing business model of distributing movies to theaters.

Nonetheless, some entrepreneurs saw a market for media in the home, and started making inroads:
  • Noel Gimbel, owner of the Chicago electronics store Sound Unlimited, thought he could stimulate VCR sales by selling public-domain movies on tapes. Later, he would convince Paramount that that studio's ill-fated exclusive with Fotomat for distributing movies was failing, as video store owners were simply distributing bootlegged copies.
  • Don Rosenberg, who worked for a record distributor, had the idea of going door-to-door convincing music retailers to expand into video, which was tricky because the distribution model for video was based on the model for the appliances with which blank tapes were sold—retailers paid for stock and sold it. In contrast, music was like books—dealers got paid only when customers bought something, and had 90 days to return unsold goods. 
  • Entrepreneur Andre Blay is credited with kickstarting the media-in-the-home industry by making successful deals with 20th Century Fox to establish a rental membership plan for movies. His company Magnetic Video did video duplication and distribution for studios, and he had seen that studios licensed 20-minute "digests" of movies for distribution on 8mm film; why couldn't they make even more by licensing full-length movies? Fox ultimately acquired Magnetic Video as Fox Home Entertainment, and other studios followed suit and set up their own Home Entertainment divisions. This forced the hand of distributors and retailers in the music industry, and the home entertainment retail industry became a hybrid of the previous music model and the new video rental model.
  • Because of the questionable moral standing of pornographic video, the societal stigma of going into a porno theater, and in some cases its ties to organized crime, pornographers were more willing to embrace risky distribution strategies. Porno was instrumental in launching the home media industry. (Porno theaters showing bootlegged tapes were paid a visit by organized crime.)
Slowly the material nature of the cassette began to give way to the abstract nature of "buying entertainment", as video stores started stocking shelves with empty boxes or box covers while keeping the tapes stored elsewhere (usually for security reasons), and the VCR itself, originally intended as the focus of consumer attention for time-shifting TV, became an incidental artifact used to play back movies. Early video stores were often staffed by movie buffs with no retail experience who just enjoyed being around movies and offering personalized advice to customers, with customers in turn offering advice to each other while browsing the shelves; "going to the video store" became a social ritual as much as watching the movie itself. Local stores hence became social spaces "like bars without alcohol" (consumption junctions, in the language of media theory).

The maturation of the rental industry: franchising and disintermediation. By the early 1980s, the nature of the rental industry changed as video rental took off. Early video-rental stores took advantage of the "first sale" doctrine that applies to books, wherein the original purchaser can do whatever they want with their copy of a video, including renting it an unlimited number of times with no royalty payments to the studio; in retaliation studios began licensing "rental-only" copies at much higher cost, and uneasy truces were eventually reached as a result of retailers and distributors forming advocacy organizations that could negotiate licensing and royalty terms with the studios. Still, with rapidly growing consumer demand for renting movies, self-styled entrepreneurs with no retail experience wanted to open video stores; some successful video chain owners even had a side business providing consulting or "turnkey setup" of your own new video-rental business, and these newer stores were mostly no longer staffed by movie buffs as in the early days. The transformation was complete when entrepreneur Wayne Huizenga saw the first Blockbuster Video store in Florida: clean and bright, family-friendly (no adult-video room in back), prominently displayed children's programming section, and the accoutrements of the movies (popcorn, candy, etc.)—something a few independents had started to do, but which became a formula with Blockbuster. The chain reached such efficiency that it could load an 18-wheeler with everything necessary (furniture, tapes, electronic equipment) to turn an empty storefront into an operating retail location within 24 hours.

What is a movie? The spread of VCRs challenged the Platonic ideal of "the movie". Previously the movie as artifact had been wedded to both the technology of the theater (albeit widely varying) and its cultural setting. TV had a different commercial milieu (embedded advertising; FCC constraints and scheduling constraints that led to often heavy "editing for TV"), a different cultural one (sitting in the dark with strangers vs. sitting in the living room with family/friends; pausing to go to the bathroom), and a different technological one (1.33 aspect ratio vs. 2.35 widescreen; mono or stereo vs. surround audio). The introduction of "letterbox" VHS tapes was bumpy because for some consumers watching movies on TV was framed as watching TV, which was supposed to fill the screen, whereas for others it was framed as watching movies, in which case letterboxing made for a more "movielike" experience. (Ironically, the 1.33 aspect ratio of TVs was chosen to imitate the early movie industry; 2.35 was adopted later when the movie industry perceived itself as under threat from TV and in need of differentiation.) Similarly with colorization: some actors, notably Cary Grant, evaluated it in terms of its matching the physical sets on which filming had occurred, whereas some directors and many critics blasted it because it distorted their only experience of the movie, which had been watching it in B&W.

Finally, the lack of social stigma around "being unable to program my VCR" (unlike, say, admitting you were unable to operate a phone) suggested that the act of programming it (i.e. time-shifting TV programs) was no longer central to the VCR's technological frame.

Conclusion. Video stores were the "mediators" between two cultures in many different ways. Studios weren't used to distributing movies on tape, or comfortable with a rental market; but that's what consumers wanted. The commercial models around distribution and retail didn't match consumers' expectations. TV technology didn't match theater technology as a way to view a movie. And consumers' perception of what "watching a movie" meant kept evolving, at once embracing "theater accoutrements" like candy and popcorn in video stores and confounding them by changing the social interactions around movie-watching. Throughout, video stores were there to mediate the transition and bring consumers and producers together. Ironically, they were so successful at doing so that they have been disintermediated:
  • Technologically, VCRs gave way to DVDs. Although DVDs provide higher picture quality, they did not initially enable the amateur market (direct-to-video indie films, home movies, etc.) in the ways the VCR did, which was critical to the cultural rise of video stores. (Today indie filmmakers can shoot direct to digital and distribute via YouTube, but that wasn't true when DVDs arrived in the early 2000s, and was barely true in 2006 when DVD movie sales first outsold VHS movie sales.) In addition, DVDs "demystify" movies by bundling making-of, interviews, etc. with the feature itself, something completely alien to the theater experience, suggesting that the transformation of consumers' perception of "watching a movie" is complete.
  • Independent video stores gave way to chains (Blockbuster, Hollywood Video), which themselves went out of business as direct-from-distributor services like Netflix arose.
The overall lesson may be: without intermediation, new cultural phenomena such as the video-movie revolution could not happen; but once underway, the intermediaries themselves become redundant. (I wonder if a similar argument could be made for retail computer sales—independent stores gave way to national chains like Computerland, then to computers being sold in office-supply stores like Office Depot as the computer became mainstream, and finally to direct-from-distributor online ordering.)

Tuesday, March 7, 2017

The CRT is dead, long live the CRT

I am a child of the 80s (and a little bit the 70s), and as a youngster I spent many, many quarters in arcade video games. (Tempest was among my favorites, and one I was actually good at.) It might be hard for today’s young adults to imagine the appeal of paying per game to play a game that lasted only a few minutes, had to be played standing up (usually), and was located in a pizzeria, bar, movie theater, or video arcade. But the first highly successful home gaming console (the Atari 2600, which sold over 40 million units during its 14-year lifetime) didn’t arrive until 1977, and while arcade games started rapidly improving after the release of Taito’s Space Invaders (1978), home games’ graphics and sound lagged far behind arcade hardware well into the late 1980s, even though Atari and others aggressively licensed the rights to produce home versions of popular arcade games. A typical arcade cabinet game might retail for $4,000, vs. around $200 for a home console. (Not to mention that going to the arcade was a social event. You know, that's the kind of event where you get together with real people to have real pizzas and real interactions, rather than "interacting" with them online.)

Today arcade cabinet games have an ardent following among retrocomputists (e.g. me), collectors, and nostalgists. But perhaps not for long: outside of this niche market, there’s virtually no demand for manufacturing CRT displays anymore, and they are surprisingly labor-intensive to manufacture, as this 5-minute video shows. In particular, few 29-inch “arcade grade” CRTs remain in the world, and the capacity to make or repair them is basically gone.

Without arguing whether new display technologies (plasma, LCD, LED) are better or worse than analog CRTs, it is certainly true that authors of older games had to work around (or more creatively, work with) the color-mixing and display constraints of analog CRTs, which are quite different from those of true discrete-pixel displays. This was especially true when designing games for home game consoles designed to connect to TV sets: these had the additional constraint that the video signal fed to the TV had to follow the somewhat quirky NTSC standard for analog color video. Famously, the Apple II video circuitry exploits idiosyncrasies of NTSC to produce high-resolution (at the time) graphics for a low (at the time) cost, at the expense of being very tricky to program. The fascinating book Racing the Beam recounts how both the console designers and game designers for the Atari 2600 leveraged the physical and electrical properties of NTSC color to create appealing games on exceedingly low-cost (for its time) hardware, even creating a custom chip to deal with some of the quirks of NTSC (the TIA or Television Interface Adapter, code-named “Stella”). And indeed, while Atari 2600 emulators are still popular and original 2600 hardware can be connected to modern LCD and plasma screens, the color effect is subjectively different from viewing it on old-school analog sets. In contrast (get it? <groan/>), although arcade video games also used large (29”) CRT displays, they weren’t bound by the signal limitations of NTSC, so they could produce graphics far superior to what home gamers could view even on comparably sized TV sets.

June 12, 2009, was the last day for all US broadcast television stations to switch from analog (NTSC-encoded) broadcasting to digital broadcasting. On that day, NTSC effectively became a dead standard. Now, the hardware that was so ubiquitously associated with it—CRTs—is on a path to meet the same fate. Before it’s gone, get yourself to a “classic games” arcade and take a step back to when the best gaming graphics and sound were found in pizzerias, bars, and candy stores.