Why You Might Soon Feel More Secure about Insecure Software

BY SCOTT BERINATO
http://www.idg.net/go.cgi?id=775870

Application security—until now an oxymoron of the highest order, like “jumbo shrimp”—is finally changing for the better.

IF SOFTWARE WERE an office building, it would be built by a thousand carpenters, electricians and plumbers. Without architects. Or blueprints. It would look spectacular, but inside, the elevators would fail regularly. Thieves would have unfettered access through open vents at street level. Tenants would need consultants to move in. They would discover that the doors unlock whenever someone brews a pot of coffee. The builders would provide a repair kit and promise that such idiosyncrasies would not exist in the next skyscraper they build (which, by the way, tenants will be forced to move into).

Strangely, the tenants would be OK with all this. They’d tolerate the costs and the oddly comforting rhythm of failure and repair that came to dominate their lives. If someone asked, “Why do we put up with this building?” shoulders would be shrugged, hands tossed and sighs heaved. “That’s just how it is. Basically, buildings suck.”

The absurdity of this is the point, and it’s universal, because the software industry is strangely irrational and antithetical to common sense. It is perhaps the first industry ever in which shoddiness is not anathema—it’s simply expected. In many ways, shoddiness is the goal. “Don’t worry, be crappy,” Guy Kawasaki wrote in 2000 in his book, Rules for Revolutionaries: The Capitalist Manifesto for Creating and Marketing New Products and Services. “Revolutionary means you ship and then test,” he writes. “Lots of things made the first Mac in 1984 a piece of crap—but it was a revolutionary piece of crap.”

The only thing more shocking than the fact that Kawasaki’s iconoclasm passes as wisdom is that executives have spent billions of dollars endorsing it. They’ve invested—and reinvested—in software built to be revolutionary and not necessarily good. And when those products fail, or break, or allow bad guys in, the blame finds its way everywhere except to where it should go: on flawed products and the vendors that create them.

“We’ve developed a culture in which we don’t expect software to work well, where it’s OK for the marketplace to pay to serve as beta testers for software,” says Steve Cross, director and CEO of the Software Engineering Institute (SEI) at Carnegie Mellon University. “We just don’t apply the same demands that we do from other engineered artifacts. We pay for Windows the same as we would a toaster, and we expect the toaster to work every time. But if Windows crashes, well, that’s just how it is.”

Application security—until now an oxymoron of the highest order, like “jumbo shrimp”—is why we’re starting here, where we usually end. Because it’s finally changing.

A complex set of factors is conspiring to create a cultural shift away from the defeatist tolerance of “that’s just how it is” toward a new era of empowerment. Not only can software get better, it must get better, say executives. They wonder, Why is software so insecure? and then, What are we doing about it?

There’s good news when it comes to application security, but it’s not the good news you might expect: application security is changing for the better in a far more fundamental and profound way. Observers invoke the automotive industry’s quality wake-up call in the ’70s. One security expert summed up the quiet revolution with a giddy, “It’s happening. It’s finally happening.”

Even Kawasaki seems to be changing his rules. He says security is a migraine headache that has to be solved. “Don’t tell me how to make my website cooler,” he says. “Tell me how I can make it secure.”

“Don’t worry, be crappy” has evolved into “Don’t be crappy.” Software that doesn’t suck. What a revolutionary concept.

Why Is Software So Insecure?
Software applications lack viable security because, at first, they didn’t need it. “I graduated in computer science and learned nothing about security,” says Chris Wysopal, technical director at security consultancy @Stake. “Program isolation was your security.”

The code-writing trade grew up during an era when only two things mattered: features and deadlines. Get the software to do something, and do it as fast as possible.

Networking changed all that. It allowed someone to hack away at your software from somewhere else, mostly undetected. But it also meant that more people were using computers, so there was more demand for software. That led to more competition. Software vendors coded frantically to outwit competitors with more features sooner. That led to what one software developer called “featureitis.” Inflammation of the features.

Now, features make software do something, but they don’t stop it from unwittingly doing something else at the same time. E-mail attachments, for example, are a feature. But e-mail attachments help spread viruses. That is an unintended consequence—and the more features, the more unintended consequences.

By 1996, the Internet, supporting 16 million hosts, was a joke in terms of security, easily compromised by dedicated attackers. Teenagers were cracking anything they wanted to: NASA, the Pentagon, the Mexican finance ministry. The odd part is, while the world changed, software development did not. It stuck to its features/deadlines culture despite the security problem.

Even today, the software development methodologies most commonly used still cater to deadlines and features, and not security. Software development has been able to maintain its old-school, insecure approach because the technology industry adopted a less-than-ideal fix for the problem: security applications, a multibillion-dollar industry’s worth of new code to layer on top of programs that remain foundationally insecure. But there’s an important subtlety. Security features don’t improve application security. They simply guard insecure code and, once bypassed, can allow access to the entire enterprise.

In other words, the industry has put locks on the doors but not on the loading dock out back. Instead of securing networking protocols, firewalls are thrown up. Instead of building e-mail programs that defeat viruses, antivirus software is slapped on.

When the first major wave of Internet attacks hit in early 2000, security software was the savior, brought in at any expense to mitigate the problem. But attacks kept coming, and more recently, security software has lost much of its original appeal. That—combined with a bad economy, a new focus on national security, pending regulation that focuses on securing information and sheer fatigue from the constant barrage of attacks—spurred many software buyers to think differently about how to fix the security problem.

In addition, a bevy of new research has been published demonstrating an ROI for vendors and users in building more secure code. Plus, a new class of software tools has been developed to automatically ferret out the most gratuitous software flaws.

Put it all together, and you get—ta da!—change. And not just change, but profound change. In technology, change usually means more features, more innovation, more services and more enhancements. In any event, it’s the vendor defining the change. This time, the buyers are foisting on vendors a better kind of change. They’re forcing vendors to go back and fix the software that was built poorly in the first place. The suddenly efficacious corporate software consumer is holding vendors accountable. He is creating contractual liability and pushing legislation. He is threatening to take his budget elsewhere if the code doesn’t tighten up. And it’s not just empty rhetoric.

Says Scott Charney, chief security strategist at Microsoft, “Suddenly, executives are saying, We’re no longer just generically concerned about security.”

Berinato’s scrutiny of software security continues in “Forcing the Vendors to Get Secure”.

find and grep for text in /*

find . -name '*' -print | xargs grep "Then fax it to"
or
find . -name '*' -print | xargs grep "development"

List matching files only
find . -name '*' | xargs grep -l "secure.sal"
or, for awkward special characters, use single quotes, e.g.:
find . -name '*' | xargs grep -l '$'

Disk space used
du /usr/local/home/httpd/vhtdocs | sort -n

Two new sites I’ve just opened accounts with

[b]sms2email.com[/b]
Allows you to receive a text message to a number with a keyword in the message and have the contents emailed to you, or define your own http POST script – how cool is that?

(The http POST was an undocumented feature until I opened the account and was a hell of a shock – developer paradise!)
https://www.sms2email.com/

[b]Half decent UK Virtual Web Hosting[/b]
Surely – that’s impossible? Value for money, stunning functionality and tech support.
I dunno – but it sure looks cool to me.
Await the QOS report in a few months.
http://www.gradwell.com/

K programming language

A Shallow Introduction to the K Programming Language (Columns)
By jjayson
Thu Nov 14th, 2002 at 05:58:07 AM EST

About two years ago I was introduced to a programming language that I really didn’t like: it didn’t have continuations, I didn’t see any objects, it had too many operators, it didn’t have a large community around it, it was strange and different, and it looked like line noise, like Perl, and I don’t like Perl. However, I gave it a try.

I had to learn that continuations may not be there, but first-class functions are; it may not have a normal object system, but that is because the language doesn’t need it and gets its power by cutting across objects; all the operators are the functions that make up its standard library; its community may not be large, but it is incredibly intelligent; it only looks strange until you understand its concepts; and, well, it will always look like line noise, but you will stop caring because this also makes the concise code easier to read. K has since become my language of choice.


http://www.kx.com/

Big Money for Cyber Security (US tax Dollars)

Big Money for Cyber Security (Technology)
By imrdkl
Wed Nov 13th, 2002 at 02:58:20 PM EST

This week, House Bill 3394, the Cyber Security Research and Development Act, passed in the Senate, and is now headed for the White House, where the President is expected to sign it without delay. Almost a billion dollars is allocated by the bill for scholarships, grants and research on the topic of Cyber Security.

While much of the existing knowledge and many of the working implementations in this area have been developed over the years as part of existing Free Software implementations, the government has found that there simply is not enough funding, or talent, behind those efforts. They’re quite concerned about vulnerabilities in the critical infrastructure of the US, including telecommunications, transportation, water supply, and banking, as well as the electric power, natural gas, and petroleum production industries, all of which rely significantly upon computers and computer networks for their operation.

The bill itself may be studied at the Library of Congress, using their search engine, or directly. This article will present an overview of the exciting and profitable opportunities which will soon be available to researchers with an interest in Cyber Security.

——————————————————————————–

Some of the other important findings of the bill include:

The US is not prepared for coordinated cyber attacks which may result from war
Federal investment in computer and network security research must be increased to decrease vulnerability, expand and improve the “pool” of knowledge, and better coordinate sharing and collaboration.

African-Americans, Hispanics, and Native Americans comprise less than 7 percent of the information science workforce, and this number should be increased.

I consider the second finding particularly interesting. Given the history of security research, when the bill finds that better sharing and collaboration is necessary, one might conclude that the government intends to support the continued and expanded efforts of Open Source software, to accomplish the task. While there are certainly closed implementations for security, it’s just “commonsensical” to put the money behind the open and freely-available efforts which are already shared, and collaborated upon.

In general, the National Science Foundation (NSF), which will administer the funds, is directed to award monies for research and study on the following topics during the next five years:

authentication, cryptography, and other secure data communications technology
computer forensics and intrusion detection
reliability of computer and network applications, middleware, operating systems, control systems, and communications infrastructure
privacy and confidentiality
network security architecture, including tools for security administration and analysis
emerging threats
vulnerability assessments and techniques for quantifying risk
remote access and wireless security
enhancement of law enforcement ability to detect, investigate, and prosecute cyber-crimes, including those that involve piracy of intellectual property.
Now, that’s certainly a broad list. It introduces significant possibilities for improving and enhancing existing implementations, as well as finding new and improved techniques. The applications which will be considered are to be evaluated on a “merit” basis, and may be undertaken by universities and other non-profit institutions, as well as partnerships between one or more of these institutions along with for-profit entities and/or government institutions.

Criteria for acceptance of any proposal submitted will be based upon:

the ability of the applicant to generate innovative approaches
the experience of the applicant in conducting research
the capacity of the applicant to attract and provide adequate support
the extent to which the applicant will partner with government laboratories, for-profit entities, other institutions of higher education, or nonprofit research institutions, and the role the partners will play in the research undertaken by the Center.
It seems a fair question to ask, why is the amount of “partnership” important? If the end result of the research is to be “shared and collaborated”, then perhaps the amount of partnership is not so critical as the first three criteria. In any case, there’s soon to be a lot of new money for study and work related to computer security. The application process itself, while not yet established, has provisions for each of the distinct topics mentioned previously, both for graduate study and training, as well as undergraduate internships and programs.

Have you an interest in Cyber Security? What programs or software could be improved, and how would such a large capital infusion for research affect these projects? What are the political ramifications of the government getting involved with the projects, either directly or indirectly? And what about the shortage of minorities in the profession? What can be done to encourage young people in general, and African-Americans, Hispanics, and Native-Americans in particular to study and learn about Cyber Security?

Other Coverage: UPI, InfoWorld and GovExec

Nickel Exchange: P2P Micropayments

Nickel Exchange: P2P Micropayments (MLP)
By higinx
Tue Nov 12th, 2002 at 12:14:34 PM EST

Many companies have tried to implement micropayment solutions before, but none have really succeeded. The Nickel Exchange introduces a completely new approach to micropayments that tries to address the flaws we’ve seen in previous systems. And best of all, it’s a free service.

http://www.ginx.com/nx/

Just checked the site July 14th 2003
They’ve paused the site ATM due to no one reaching 100 units 🙁

Next best that I can see is:

http://www.centipaid.com/

July 16th 2003
http://www.bitpass.com/

July 22
http://www.amazon.com/webservices/

“We are almost ready to kick off the beta for our payment system. The payment system will allow visitors to your site to use their Amazon account to pay you for any product or service. You can also offer subscriptions and controlled access to content. You will be able to verify the status of any transaction to make sure that the user has not rescinded it. We will provide you with a base-level API and you can construct your business logic on top.”

http://www.dashes.com/anil/index.php?archives/006765.php

Blogs referral marketing – bastards

Spam meets blogs (MLP)
By kpaul
Mon Oct 28th, 2002 at 07:33:29 AM EST

Michelle Delio at Wired has an interesting article (When the Spam Hits the Blogs). In it, she explores another somewhat new phenomenon in the blogosphere. According to the article, spammers have begun hitting sites furiously to get links on a lot of sites’ backlink lists.
….

When the Spam Hits the Blogs – Michelle Delio – Wired original article.

How to present ReferrerLinking on your web site

Just read two books that I think most people in the US and UK should read

Stupid White Men … and Other Sorry Excuses for the State of the Nation! by Michael Moore

Now the Moore book started off fine, but lost it three-quarters of the way through.

It’s an interesting book, but you have to be careful: he neglects to say that the Bin Laden family disowned Osama years ago. He fails to point out what would have happened if the Bin Laden family hadn’t been evacuated from the US after 9/11 – a lynch mob, that’s what. A few family members who had nothing to do with it would have been killed in the heat of the moment.

A great book spoilt by a patriotic ending.

The book is important in my opinion because we tend to follow the US. It’s on the best-seller lists, and justifiably so.

http://www.google.com/search?hl=en&ie=ISO-8859-1&q=Stupid+White+Men

Closely followed by :

War on Iraq by Scott Ritter & William Rivers Pitt

This book should be PDF’d and developed as a virus and sent to the electorate in the US and Britain that have email. (With a read receipt back to their conscience.)

This book is just a truly shocking, scary book.

http://www.google.com/search?hl=en&lr=&ie=ISO-8859-1&q=war+on+iraq+ritter+pitt

Sendo ditches closed source Micky$oft

http://www.sendo.co.uk/news/newsitem.asp?ID=61

SENDO CHOOSES NOKIA’S SERIES 60 PLATFORM FOR ITS SMART PHONES
Thu Nov 7 2002
Sendo, a British mobile phone manufacturer, today announced that the company has decided to license the Series 60 Platform from Nokia for its smart phone category. The Series 60 is a software platform for feature- and application-rich smart phones that Nokia licenses to mobile handset manufacturers. The platform is optimised to run on top of the Symbian OS. Sendo joins as the newest member of the Series 60 licensing community with Matsushita, Samsung, Siemens and Nokia.

“Earlier this fall we reviewed our smart phone strategy. While our mission of providing customers with feature-rich and ubiquitous devices remains unaltered, seeing that the Series 60 fully embraces both our mission and the new strategy we decided to approach Nokia,” said Hugh Brogan, Chief Executive Officer of Sendo Holdings Plc. “The platform utilises open standards and technologies, such as MMS and Java, jointly developed by the industry. The platform is robust, yet uniquely flexible, bringing great benefits to licensees, operators, developers and consumers.”

“We welcome Sendo, a pioneer in smart phone development, to join our Series 60 community. We see that a combination of Sendo’s technical expertise and growing market presence will bring significant contribution to the mobile market with Series 60 devices. Interoperable solutions that are built on open and common industry standards are proving to be the winning formula in meeting demands of business users and consumers alike,” said Niklas Savander, Vice President and General Manager, Nokia Mobile Software.

Nokia licenses the Series 60 Platform as source code. The model enables licensees to contribute to the development of the platform while fully executing their individual business strategy, brand and customer requirements in a fast-developing and highly competitive mobile communications market. Licensees will be able to include the Series 60 in their own smart phone designs, thus speeding up the rollout of new phone models at lower costs.

The Series 60 is a comprehensive software platform for smart phones, created for mobile phone users that demand easy-to-use, one-hand operated handsets with high-quality colour screens, rich communications and enhanced applications. The Series 60 platform consists of the key telephony and personal information management applications, the browser and messaging clients, as well as a complete and modifiable user interface, all designed to run on top of the Symbian OS, an operating system for advanced, data enabled mobile phones.

——————————————————————————–

For further information, please contact

Marijke van Hooren
Sendo
Phone:+44 (0) 121 251 5060
Mobile:+44 (0) 7968 820 701
[email]mvanhooren@sendo.com[/email]

Nokia Corporate Communications (Americas)
Phone:+1 972 894 4875

Nokia Mobile Software Communications
Phone:+358 7 180 08000
[email]nokia.mobile.phones@nokia.com[/email]
www.nokia.com

——————————————————————————–

About Sendo
Sendo, headquartered in the United Kingdom, started shipping its first terminals to operator customers in Europe and Asia in May 2001. The company is now shipping five products in over twenty countries in Europe and Asia, with the USA soon to follow. Sendo offers high-performance, competitively priced, reliable products and services to the cellular market. Sendo has been established with the needs of the wireless carriers and consumers in mind. The company offers a complete custom program, from exclusively branded phones, matched fulfillment programs and software with dedicated services. Details of the company are available at www.sendo.com

About Nokia
Nokia is the world leader in mobile communications. Backed by its experience, innovation, user-friendliness and reliable solutions, the company has become the leading supplier of mobile phones and a leading supplier of mobile, fixed broadband and IP networks. By adding mobility to the Internet Nokia creates new opportunities for companies and further enriches the daily lives of people. Nokia is a broadly held company with listings on six major exchanges.

Real Hacking Rules! (What Is the Essence of Hacking?)

Real Hacking Rules!
Or, Before the Word is Totally Useless, What Is the Essence of Hacking?
by Richard Thieme
10/04/2002
http://www.oreillynet.com/pub/a/network/2002/10/04/hackers.html

On the tenth anniversary of Def Con, the annual Las Vegas meeting of computer hackers, security professionals, and others, I reflected on how the con–and hacking–had changed since I spoke at Def Con 4 seven years earlier.

The word hacker today means everything from digging into a system–any system–at root level to defacing a Web site with graffiti. Because we have to define what we mean whenever we use the term, the word is lost to common usage. Still, post 9/11 and the Patriot Act, it behooves hackers of any definition to be keenly aware of the ends to which they hack. Hackers must know their roots and know how to return to “root” when necessary.

At Def Con 4 I said that hacking was practice for transplanetary life in the 21st century. I was right. The skills I foresaw as essential just a short generation ahead have indeed been developed by the best of the hacker community, who helped to create–and secure–the Net that is now ubiquitous. But the game of building and cracking security, managing multiple identities, and obsessing over solving puzzles is played now on a ten-dimensional chess board. Morphing boundaries at every level of organizational structure have created a new game.

In essence, hacking is a way of thinking about complex systems. It includes the skills required to cobble together seemingly disparate pieces of a puzzle in order to understand the system; whether modules of code or pieces of a bigger societal puzzle, hackers intuitively grasp and look for the bigger picture that makes sense of the parts. So defined, hacking is a high calling. Hacking includes defining and defending identity, creating safe boundaries, and searching for the larger truth in a maze of confusion and intentional disinformation.

In the national security state that has evolved since World War II, hacking is one means by which a free people can retain freedom. Hacking includes the means and methodologies by which we construct more comprehensive truths or images of the systems we hack.

Hackers cross disciplinary lines. In addition to computer hackers, forensic accountants (whistleblowers, really), investigative journalists (“conspiracy theorists”), even shamans are hackers because hacking means hacking both the system and the mind that made it. That’s why, when you finally understand Linux, you understand … everything.

The more complex the system, the more challenging the puzzles, the more exhilarating the quest. Edward O. Wilson said in Consilience that great scientists are characterized by a passion for knowledge, obsessiveness, and daring.

Real hackers too.

The Cold War mentality drew the geopolitical map of the world as opposing alliances; now the map is more complex, defining fluid alliances in terms of non-state actors, narcotics/weapons-traffickers, and incendiary terrorist cells. Still, the game is the same: America sees itself as a huge bulls-eye always on the defensive.

In this interpretation, the mind of society is both target and weapon and the management of perception–from deception and psychological operations to propaganda, spin, and public relations–is its cornerstone.

That means that the modules of truth that must be connected to form the bigger picture are often exchanged in a black market. The machinery of that black market is hacking.

Here’s an example:

A colleague was called by a source after a major blackout in the Pacific Northwest. The source claimed that the official explanation for the blackout was bogus. Instead, he suggested, a non-state aggressor such as a narco-terrorist had probably provided a demonstration of power, attacking the electric grid as a show of force.

“The proof will come,” he said, “if it happens again in a few days.”

A few days later, another blackout hit the area.

Fast-forward to a security conference at which an Army officer and I began chatting. One of his stories made him really chuckle.

“We were in the desert,” he said, “testing an electromagnetic weapon. It was high-level stuff. We needed a phone call from the Secretary of Defense to hit the switch. When we did, we turned out the lights all over the Pacific Northwest.” He added, “Just to be sure, we did it again a few days later and it happened again.”

That story is a metaphor for life in a national security state.

That test took place in a secured area that was, in effect, an entire canyon. Cover stories were prepared for people who might wander in, cover stories for every level of clearance, so each narrative would fuse seamlessly with how different people “constructed reality.”

The journalistic source was correct in knowing that the official story didn’t account for the details. He knew it was false but didn’t know what was true. In the absence of truth, we make it up. Only when we have the real data, including the way the data has been rewritten to obscure the truth, can we know what is happening.

That’s hacking on a societal level. Hacking is knowing how to discern or retrieve information beyond that which is designed for official consumption. It is abstract thinking at the highest level, practical knowledge of what’s likely, or might, or must be true, if this little piece is true, informed by an intuition so tutored over time it looks like magic.

Post 9/11, the distinction between youthful adventuring and reconstituting the bigger picture on behalf of the greater good is critical. What was trivial mischief that once got a slap on the wrist is now an act of terrorism, setting up a teenager for a long prison term. The advent of global terrorism and the beginning of the Third World War have changed the name of the game.

Yet without checks and balances, we will go too far in the other direction. The FBI in Boston is currently notorious for imprisoning innocent men to protect criminal allies. I would guess that the agents who initiated that strategy had good intentions. But good intentions go awry. Without transparency, there is no truth. Without truth, there is no accountability. Without accountability, there is no justice.

Hacking ensures transparency. Hacking is about being free in a world in which we understand that we will never be totally free.

Nevertheless, hackers must roll the boulder up the hill. They have no choice but to be who they are. But they must understand the context in which they work and the seriousness of the consequences when they don’t.

Hackers must, as the Good Book says, be wise as serpents and innocent as doves.

Richard Thieme is a business consultant, writer, and professional speaker focused on “life on the edge,” in particular the human dimension of technology and the work place, change management and organizational effectiveness.

the next big thang… – gentoo linux?

http://www.gentoo.org/

O’Reilly article
In the article, a brief overview of some of the features of Gentoo Linux, Daniel expounds on what kinds of enhancements users can expect in the 1.4 final release: support for true 64-bit on the UltraSparc architecture, KDE 3.0.4, a gentoo-sources kernel with Andrea Arcangeli’s 3.5GB “user address space” patch and grsec, and of course the new Gentoo Reference Platform for fast binary installs.

Googling Your Email by Jon Udell

Googling Your Email
by Jon Udell
10/07/2002
http://www.oreillynet.com/pub/a/network/2002/10/07/udell.html

Someday we’ll tell our grandchildren about those moments of epiphany, back in the last century, when we first glimpsed how the Web would change our relationship to the world. For me, one of those moments came when I was looking for an ODBC driver kit that I knew was on a CD somewhere in my office. After rifling through my piles of clutter to no avail, I tried rifling through AltaVista’s index. Bingo! Downloading those couple of megabytes over our 56K leased line to the Internet was, to be sure, way slower than my CD-ROM drive’s transfer rate would have been, but since I couldn’t lay my hands on the CD, it was a moot point. Through AltaVista I could find, and then possess, things that I already possessed but could not find.

There began an odd inversion that continues to the present day. Any data that’s public, and that Google can see, is hardly worth storing and organizing. We simply search for what we need, when we need it: just-in-time information management. But since we don’t admit Google to our private data stores — Intranets [1] and mailboxes, for example — we’re still like the shoemaker’s barefoot children. Most of us can find all sorts of obscure things more easily than we can find the file that Tom sent Leslie last week.

What would it be like to Google your email? Raphaël Szwarc’s ZOË is a clever piece of software that explores this idea. It’s written in Java (source available), so it can be debugged and run everywhere. ZOË is implemented as a collection of services. Startup is as simple as unpacking the zipped tarball and launching ZOË.jar. The services that fire up include a local Web server that handles the browser-based UI, a text indexing engine, a POP client and server, and an SMTP server.
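For a feel of what that local indexing involves, here is a minimal sketch in Python (ZOË itself is Java, and this is not its code): read a local mailbox, build a simple inverted index over senders, subjects and bodies, and answer full-text queries against it. The mbox path, the tokenizer and the AND-style query semantics are assumptions made for illustration.

import mailbox
import re
from collections import defaultdict

def tokenize(text):
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def build_index(mbox_path):
    index = defaultdict(set)   # token -> set of message keys
    messages = {}              # message key -> (sender, subject)
    for key, msg in mailbox.mbox(mbox_path).items():
        sender = msg.get("From", "")
        subject = msg.get("Subject", "")
        body = msg.get_payload()
        text = " ".join([sender, subject, body if isinstance(body, str) else ""])
        for token in tokenize(text):
            index[token].add(key)
        messages[key] = (sender, subject)
    return index, messages

def search(index, messages, query):
    # AND semantics: a hit must contain every term in the query.
    hits = None
    for token in tokenize(query):
        hits = index[token] if hits is None else hits & index[token]
    return [messages[key] for key in sorted(hits or [])]

# Usage (the mailbox path is an assumption):
index, messages = build_index("inbox.mbox")
for sender, subject in search(index, messages, "infoworld"):
    print(sender, "-", subject)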

Because ZOË has a Web-style architecture, you can use it remotely as well as locally. At the moment, for example, I’m running ZOË on a Mac OS X box in my office, but browsing into it from my wirelessly connected laptop outside. I wouldn’t recommend this, however, since ZOË’s Web server has no access controls in place. By contrast, Radio Userland — also a local, Web-server-based application, which I’m currently running on a Windows XP box in my office and browsing into remotely — does offer HTTP basic authentication, though not over SSL. In the WiFi era, you have to be aware of which local services are truly local.

ZOË doesn’t aim to replace your email client, but rather to proxy your mail traffic and build useful search and navigation mechanisms. At the moment, I’m using ZOË together with Outlook (on Windows XP) and Entourage (on MacOSX). ZOË’s POP client sucks down and indexes my incoming mail in parallel with my regular clients. (I leave a cache of messages on the server so the clients don’t step on one another.) By routing my outbound mail through ZOË’s SMTP server, it gets to capture and index that as well. Here’s a typical search result.

[see original web site screenshot]

ZOË helps by contextualizing the results, then extracting and listing Contributors (the message senders), Attachments, and Links (such as the URL strings found in the messages). These context items are all hyperlinks. Clicking “Doug Dineley” produces the set of messages from Doug, like so:

Following Weblog convention, the # sign preceding Doug’s name is a permalink. It assigns a URL to the query “find all of Doug’s messages,” so you can bookmark it or save it on the desktop.

Note also the breadcrumb trail that ZOË has built:

ZOË -> Com -> InfoWorld

These are links too, and they lead to directories that ZOË has automatically built. Here’s the view after clicking the InfoWorld link:

[see original web site screenshot]

Nice! Along with the directory of names, ZOË has organized all of the URLs that appear in my InfoWorld-related messages. This would be even more interesting if those URLs were named descriptively, but of course, that’s a hard thing to do. Alternatively, ZOË could spider those URLs and produce a view offering contextual summaries of them. We don’t normally think of desktop applications doing things like that, but ZOË (like Google) is really a service, working all the time, toiling in ways that computers should and people shouldn’t.

When we talk about distributed Web services, we ought not lose sight of the ones that run on our own machines, and have access to our private data. ZOË reminds us how powerful these personal services can be. It also invites us to imagine even richer uses for them.

Fast, fulltext search, for example, is only part of the value that ZOË adds. Equally useful is the context it supplies. That, of course, relies on the standard metadata items available in email: Subject, Date, From. Like all mail archivers, ZOË tries to group messages into threads, and like all of them, it is limited by the unfortunate failure of mail clients to use References or In-Reply-To headers in a consistent way. Threading, therefore, depends on matching the text of Subject headers and sacrifices a lot of useful context.
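To illustrate the compromise Udell describes, here is a rough Python sketch (not ZOË’s code): prefer the References and In-Reply-To headers when a client supplies them, and fall back to a normalised Subject line when it doesn’t. A fuller implementation, such as jwz’s threading algorithm, would also join a thread’s root message to its replies via the root’s own Message-ID.

import re
from collections import defaultdict

def normalise_subject(subject):
    # Strip any number of leading "Re:" / "Fwd:" markers.
    return re.sub(r"^(\s*(re|fwd?)\s*:\s*)+", "", subject or "", flags=re.I).strip().lower()

def thread_key(msg):
    # msg is an email.message.Message; use the oldest reference as the thread root.
    references = (msg.get("References") or "").split()
    if references:
        return ("id", references[0])
    parent = (msg.get("In-Reply-To") or "").strip()
    if parent:
        return ("id", parent)
    return ("subject", normalise_subject(msg.get("Subject")))

def group_into_threads(messages):
    threads = defaultdict(list)
    for msg in messages:
        threads[thread_key(msg)].append(msg)
    return threads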

For years, I’ve hoped email clients would begin to support custom metadata tags that would enable more robust contextualization — even better than accurate threading would provide. My working life is organized around projects, and every project has associated with it a set of email messages. In Outlook, I use filtering and folders to organize messages by project. Unfortunately, there’s no way to reuse that effort. The structure I impose on my mail store cannot be shared with other software, or with other people. Neither can the filtering rules that help me maintain that structure. This is crazy! We need to start to think of desktop applications not only as consumers of services, but also as producers of them. If Outlook’s filters were Web services, for example, then ZOË — running on the same or another machine — could make use of them.

Services could flow in the other direction, too. For example, ZOË spends a lot of time doing textual analysis of email. Most of the correlations I perform manually, using Outlook folders, could be inferred by a hypothetical version of ZOË that would group messages based on matching content in their bodies as well as in their headers, then generate titles for these groups by summarizing them. There should be no need for Outlook to duplicate these structures. ZOË could simply offer them as a metadata feed, just as it currently offers an RSS feed that summarizes the current day’s messages.
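For comparison, a daily metadata feed of the kind mentioned above is only a few lines of work. This is a hedged sketch: the item fields and the local URL are invented for illustration and are not taken from ZOË.

from xml.sax.saxutils import escape

def messages_to_rss(messages, title="Today's messages"):
    # messages is a list of (subject, sender, link) tuples.
    items = "".join(
        "<item><title>%s</title><author>%s</author><link>%s</link></item>"
        % (escape(subject), escape(sender), escape(link))
        for subject, sender, link in messages
    )
    return ('<?xml version="1.0"?><rss version="2.0"><channel>'
            "<title>%s</title><link>http://localhost:8080/</link>"
            "<description>Summary of the current day's messages</description>%s"
            "</channel></rss>" % (escape(title), items))

print(messages_to_rss([("Review schedule", "Doug Dineley", "http://localhost:8080/msg/42")]))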

At InfoWorld’s recent Web services conference, Google’s cofounder Sergey Brin gave a keynote talk. Afterward, somebody asked him to weigh in on RDF and the semantic Web. “Look,” he said, “putting angle brackets around everything is not a technology, by itself. I’d rather make progress by having computers understand what humans write, than to force humans to write in ways computers can understand.” I’ve always thought that we need to find more and better ways to capture metadata when we communicate. But I’ve got to admit that the filtering and folders I use in Outlook require more effort than most people will ever be willing to invest. There may yet turn out to be ways to make writing the semantic Web easy and natural. Meanwhile, Google and, now, ZOË remind us that we can still add plenty of value to the poorly-structured stuff that we write every day. It’s a brute-force strategy, to be sure, but isn’t that why we have these 2GHz personal computers?

Jon Udell is lead analyst for the InfoWorld Test Center.
——————————————————————————–

[1] Users of the Google Search Appliance do, of course, invite Google behind the firewall.

The Case Against Micropayments

The Case Against Micropayments
by Clay Shirky
12/19/2000
http://www.openp2p.com/pub/a/p2p/2000/12/19/micropayments.html

Micropayments are back, at least in theory, thanks to P2P. Micropayments are an idea with a long history and a disputed definition – as the W3C micropayment working group puts it, ” … there is no clear definition of a ‘Web micropayment’ that encompasses all systems,” but in its broadest definition, the word micropayment refers to “low-value electronic financial transactions.”

P2P creates two problems that micropayments seem ideally suited to solve. The first is the need to reward creators of text, graphics, music or video without the overhead of publishing middlemen or the necessity to charge high prices. The success of music-sharing systems such as Napster and Audiogalaxy, and the growth of more general platforms for file sharing such as Gnutella, Freenet and AIMster, make this problem urgent.

The other, more general P2P problem micropayments seem to solve is the need for efficient markets. Proponents believe that micropayments are ideal not just for paying artists and musicians, but for providers of any resource – spare cycles, spare disk space, and so on. Accordingly, micropayments are a necessary precondition for the efficient use of distributed resources.

Jakob Nielsen, in his essay The Case for Micropayments writes, “I predict that most sites that are not financed through traditional product sales will move to micropayments in less than two years,” and Nicholas Negroponte makes an even shorter-term prediction: “You’re going to see within the next year an extraordinary movement on the Web of systems for micropayment … .” He goes on to predict micropayment revenues in the tens or hundreds of billions of dollars.

Alas for micropayments, both of these predictions were made in 1998. (In 1999, Nielsen reiterated his position, saying, “I now finally believe that the first wave of micropayment services will hit in 2000.”) And here it is, the end of 2000. Not only did we not get the flying cars, we didn’t get micropayments either. What happened?

Micropayments: An Idea Whose Time Has Gone
Micropayment systems have not failed because of poor implementation; they have failed because they are a bad idea. Furthermore, since their weakness is systemic, they will continue to fail in the future.

Proponents of micropayments often argue that the real world demonstrates user acceptance: Micropayments are used in a number of household utilities such as electricity, gas and, most germanely, telecom services like long distance.

These arguments run aground on the historical record. There have been a number of attempts to implement micropayments, and they have not caught on even in a modest fashion – a partial list of floundering or failed systems includes FirstVirtual, Cybercoin, Millicent, Digicash, Internet Dollar, Pay2See, MicroMint and Cybercent. If there was going to be broad user support, we would have seen some glimmer of it by now.

Furthermore, businesses like the gas company and the phone company that use micropayments offline share one characteristic: They are all monopolies or cartels. In situations where there is real competition, providers are usually forced to drop “pay as you go” schemes in response to user preference, because if they don’t, anyone who can offer flat-rate pricing becomes the market leader. (See sidebar: “Simplicity in pricing.”)

The historical record for user preferences in telecom has been particularly clear. In Andrew Odlyzko’s seminal work, The history of communications and its implications for the Internet, he puts it this way:

“There are repeating patterns in the histories of communication technologies, including ordinary mail, the telegraph, the telephone, and the Internet. In particular, the typical story for each service is that quality rises, prices decrease, and usage increases to produce increased total revenues. At the same time, prices become simpler.

“The historical analogies of this paper suggest that the Internet will evolve in a similar way, towards simplicity. The schemes that aim to provide differentiated service levels and sophisticated pricing schemes are unlikely to be widely adopted.”

Why have micropayments failed? There’s a short answer and a long one. The short answer captures micropayment’s fatal weakness; the long one just provides additional detail.

The Short Answer for Why Micropayments Fail
Users hate them.

The Long Answer for Why Micropayments Fail
Why does it matter that users hate micropayments? Because users are the ones with the money, and micropayments do not take user preferences into account.

In particular, users want predictable and simple pricing. Micropayments, meanwhile, waste the users’ mental effort in order to conserve cheap resources, by creating many tiny, unpredictable transactions. Micropayments thus create in the mind of the user both anxiety and confusion, characteristics that users have not heretofore been known to actively seek out.

Anxiety and the Double-Standard of Decision Making
Many people working on micropayments emphasize the need for simplicity in the implementation. Indeed, the W3C is working on a micropayment system embedded within a link itself, an attempt to make the decision to purchase almost literally a no-brainer.

Embedding the micropayment into the link would seem to take the intrusiveness of the micropayment to an absolute minimum, but in fact it creates a double-standard. A transaction can’t be worth so much as to require a decision but worth so little that that decision is automatic. There is a certain amount of anxiety involved in any decision to buy, no matter how small, and it derives not from the interface used or the time required, but from the very act of deciding.

Micropayments, like all payments, require a comparison: “Is this much of X worth that much of Y?” There is a minimum mental transaction cost created by this fact that cannot be optimized away, because the only transaction a user will be willing to approve with no thought will be one that costs them nothing, which is no transaction at all.

Thus the anxiety of buying is a permanent feature of micropayment systems, since economic decisions are made on the margin – not, “Is a drink worth a dollar?” but, “Is the next drink worth the next dollar?” Anything that requires the user to approve a transaction creates this anxiety, no matter what the mechanism for deciding or paying is.

The desired state for micropayments – “Get the user to authorize payment without creating any overhead” – can thus never be achieved, because the anxiety of decision making creates overhead. No matter how simple the interface is, there will always be transactions too small to be worth the hassle.

Confusion and the Double-Standard of Value
Even accepting the anxiety of deciding as a permanent feature of commerce, micropayments would still seem to have an advantage over larger payments, since the cost of the transaction is so low. Who could haggle over a penny’s worth of content? After all, people routinely leave extra pennies in a jar by the cashier. Surely amounts this small make valuing a micropayment transaction effortless?

Here again micropayments create a double-standard. One cannot tell users that they need to place a monetary value on something while also suggesting that the fee charged is functionally zero. This creates confusion – if the message to the user is that paying a penny for something makes it effectively free, then why isn’t it actually free? Alternatively, if the user is being forced to assent to a debit, how can they behave as if they are not spending money?

Beneath a certain price, goods or services become harder to value, not easier, because the X for Y comparison becomes more confusing, not less. Users have no trouble deciding whether a $1 newspaper is worthwhile – did it interest you, did it keep you from getting bored, did reading it let you sound up to date – but how could you decide whether each part of the newspaper is worth a penny?

Was each of 100 individual stories in the newspaper worth a penny, even though you didn’t read all of them? Was each of the 25 stories you read worth 4 cents apiece? If you read a story halfway through, was it worth half what a full story was worth? And so on.

When you disaggregate a newspaper, it becomes harder to value, not easier. By accepting that different people will find different things interesting, and by rolling all of those things together, a newspaper achieves what micropayments cannot: clarity in pricing.

The very micro-ness of micropayments makes them confusing. At the very least, users will be persistently puzzled over the conflicting messages of “This is worth so much you have to decide whether to buy it or not” and “This is worth so little that it has virtually no cost to you.”

User Preferences
Micropayment advocates mistakenly believe that efficient allocation of resources is the purpose of markets. Efficiency is a byproduct of market systems, not their goal. The reasons markets work are not because users have embraced efficiency but because markets are the best place to allow users to maximize their preferences, and very often their preferences are not for conservation of cheap resources.

Imagine you are moving and need to buy cardboard boxes. Now you could go and measure the height, width, and depth of every object in your house – every book, every fork, every shoe – and then create 3D models of how these objects could be most densely packed into cardboard boxes, and only then buy the actual boxes. This would allow you to use the minimum number of boxes.

But you don’t care about cardboard boxes, you care about moving, so spending time and effort to calculate the exact number of boxes conserves boxes but wastes time. Furthermore, you know that having one box too many is not nearly as bad as having one box too few, so you will be willing to guess how many boxes you will need, and then pad the number.

For low-cost items, in other words, you are willing to overpay for cheap resources, in order to have a system that maximizes other, more important, preferences. Micropayment systems, by contrast, typically treat cheap resources (content, cycles, disk) as precious commodities, while treating the user’s time as if it were so abundant as to be free.

Micropayments Are Just Payments
Neither the difficulties posed by mental transaction costs nor the historical record of user demand for simple, predictable pricing offers much hope for micropayments. In fact, as happened with earlier experiments attempting to replace cash with “smart cards,” a new form of financial infrastructure turned out to be unnecessary when the existing infrastructure proved flexible enough to be modified. Smart cards as cash replacements failed because the existing credit card infrastructure was extended to include both debit cards and ubiquitous card-reading terminals.

So it is with micropayments. The closest thing we have to functioning micropayment systems, Qpass and Paypal, are simply new interfaces to the existing credit card infrastructure. These services do not lower mental transaction costs nor do they make it any easier for a user to value a penny’s worth of anything – they simply make it possible for users to spend their money once they’ve decided to.

Micropayment systems are simply payment systems, and the size and frequency of the average purchase will be set by the user’s willingness to spend, not by special infrastructure or interfaces. There is no magic bullet – only payment systems that work within user expectations can succeed, and users will not tolerate many tiny payments.

Old Solutions
This still leaves the problems that micropayments were meant to solve. How to balance users’ strong preference for simple pricing with the enormous number of cheap, but not free, things available on the Net?

Micropayment advocates often act as if this is a problem particular to the Internet, but the real world abounds with items of vanishingly small value: a single stick of gum, a single newspaper article, a single day’s rent. There are three principal solutions to this problem offline – aggregation, subscription, and subsidy – that are used individually or in combination. It is these same solutions – and not micropayments – that are likely to prevail online as well.

Aggregation
Aggregation follows the newspaper example earlier – gather together a large number of low-value things, and bundle them into a single higher-value transaction.

Call this the “Disneyland” pricing model – entrance to the park costs money, and all the rides are free. Likewise, the newspaper has a single cost, that, once paid, gives the user free access to all the stories.

Aggregation also smoothes out the differences in preferences. Imagine a newspaper sold in three separate sections – news, business, and sports. Now imagine that Curly would pay a nickel to get the news section, a dime for business, and a dime for sports; Moe would pay a dime each for news and business but only a nickel for sports; and Larry would pay a dime, a nickel, and a dime.

If the newspaper charges a nickel a section, each man will buy all three sections, for 15 cents. If it prices each section at a dime, each man will opt out of one section, paying a total of 20 cents. If the newspaper aggregates all three sections together, however, Curly, Moe and Larry will all agree to pay 25 cents for the whole, even though they value the parts differently.
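To make the arithmetic concrete, here is a small Python check of the three pricing schemes, using the valuations from the text (in cents).

valuations = {
    "Curly": {"news": 5, "business": 10, "sports": 10},
    "Moe":   {"news": 10, "business": 10, "sports": 5},
    "Larry": {"news": 10, "business": 5, "sports": 10},
}

def revenue_per_section(price):
    # Each reader buys only the sections he values at or above the price.
    return sum(price for prefs in valuations.values()
               for value in prefs.values() if value >= price)

def revenue_bundle(price):
    # Each reader buys the whole paper if his total valuation covers the price.
    return sum(price for prefs in valuations.values()
               if sum(prefs.values()) >= price)

print(revenue_per_section(5))    # 45: each man pays 15 cents for all three sections
print(revenue_per_section(10))   # 60: each man skips his nickel section and pays 20 cents
print(revenue_bundle(25))        # 75: all three pay 25 cents for the bundle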

Aggregation thus not only lowers the mental transaction costs associated with micropayments by bundling several purchase decisions together, it creates economic efficiencies unavailable in a world where each resource is priced separately.

Subscription
A subscription is a way of bundling diverse materials together over a set period, in return for a set fee from the user. As the newspaper example demonstrates, aggregation and subscription can work together for the same bundle of assets.

Subscription is more than just aggregation in time. Money’s value is variable – $100 today is better than $100 a month from now. Furthermore, producers value predictability no less than consumers, so producers are often willing to trade lower subscription prices in return for lump sum payments and a more predictable revenue stream.

Long-term incentives

Game theory fans will recognize subscription arrangements as an Iterated Prisoner’s Dilemma, where the producer’s incentive to ship substandard product or the consumer’s to take resources without paying is dampened by the repetition of delivery and payment.

Subscription also serves as a reputation management system. Because producer and consumer are more known to one another in a subscription arrangement than in one-off purchases, and because the consumer expects steady production from the producer, while the producer hopes for renewed subscriptions from the consumer, both sides have an incentive to live up to their part of the bargain, as a way of creating long-term value. (See sidebar: “Long-term incentives”.)

Subsidy
Subsidy is by far the most common form of pricing for the resources micropayments were meant to target. Subsidy is simply getting someone other than the audience to offset costs. Again, the newspaper example shows that subsidy can exist alongside aggregation and subscription, since the advertisers subsidize most, and in some cases all, of a newspaper’s costs. Advertising subsidy is the normal form of revenue for most Web sites offering content.

The biggest source of subsidy on the Net overall, however, is from the users themselves. The weblog movement, where users generate daily logs of their thoughts and interests, is typically user subsidized – both the time and the resources needed to generate and distribute the content are donated by the user as a labor of love.

Indeed, even as the micropayment movement imagines a world where charging for resources becomes easy enough to spawn a new class of professionals, what seems to be happening is that the resources are becoming cheap enough to allow amateurs to easily subsidize their own work.

Against users’ distaste for micropayments, aggregation, subscription and subsidy will be the principal tools for bridging the gap between atomized resources and demand for simple, predictable pricing.

Playing by the Users’ Rules
Micropayment proponents have long suggested that micropayments will work because it would be great if they did. A functioning micropayment system would solve several thorny financial problems all at once. Unfortunately, the barriers to micropayments are not problems of technology and interface, but user approval. The advantage of micropayment systems to people receiving micropayments is clear, but the value to users whose money and time is involved isn’t.

Because of transactional inefficiencies, user resistance, and the increasing flexibility of the existing financial framework, micropayments will never become a general class of network application. Anyone setting out to build systems that reward resource providers will have to create payment systems that provide users the kind of financial experience they demand – simple, predictable and easily valued. Only solutions that play by these rules will succeed.

——————————————————————————–

Clay Shirky is a Partner at The Accelerator Group. He writes extensively about the social and economic effects of the internet for the O’Reilly Network, Business 2.0, and FEED.

genomics, nanotechnology, The Economist and Red Herring’s view

[b]
The locus of innovation
Have information technology and communications become boring?
[/b]
by Jason Pontin
September 27, 2002
http://www.redherring.com/columns/2002/friday/lastword092702.html

The Economist said it, and therefore it must be true. In the latest Technology Quarterly, published in the September 21 issue of the news magazine, the editors write, “A glance at where, and for what, patents are now being granted, suggests that innovation has begun to move away from telecoms, computing, and ecommerce towards fresher pastures–especially in genomics and nanotechnology.”

Do we really believe this? Surely the smarmy British magazine has an answerable point when it notes, “The excessive exuberance during the run-up to the millennium has saddled the IT industry worldwide with $750 billion of debt and some $250 billion of overcapacity. That is an awfully big hangover to overcome.”

Nor have I forgotten that last week I essentially agreed with Charles Fitzgerald, Microsoft’s chief propagandist, when he said that, so far as software was concerned, “I am a believer in the mundane future.”

Finally, both as a future patient of drug and genetic therapies and as someone interested in new technology, I am excited by the convergence of the life sciences, computing, and nanotechnology. Imagine a future where quantum dots in your body detect a cellular catastrophe like an epileptic stroke or heart attack, and chips in your blood stream deliver a drug perfectly designed to stop that catastrophe before it can seriously harm your organism. All without serious side effects. Sound far-fetched? Science-fictional? It’s only years away; it’s in clinical trials now.

But I have been writing about information technology for almost a decade, and I am equally certain of one other thing: IT is as cyclical as a manic depressive’s moods. While this “bottom” exceeds in scale and seriousness anything in the history of computing, information technologists always seem to insist that their industry has become a boring, commodities-based sector just before a kid in some university dorm dreams up something that fundamentally changes the way businesses work and ordinary folks conduct their lives.

Alas, at the moment we don’t know what this something will be. Cringely’s Law says that in the short term things change much less than we expect, but that we have absolutely no idea what will happen in the long term. Therefore I believe this: biotechnology and nanotechnology will be the locus of innovation and wealth-creation in the immediate future. I recognize that certain structural difficulties in IT and telecom must be addressed before any renaissance can occur–specifically, all that debt and excess capacity must be reduced, and the “last mile” must be conquered and broadband Internet access brought to every American home at an affordable price.

But I will not write off IT quite yet. While the immediate future may be mundane, I am certain that further in the future we will have another computing revolution that will excite investors, consumers, and businesses as much as personal computers and the Internet once excited them. I think I even know what that revolution will be: an “always on,” distributed, intelligent network.

It’s a great time to be an entrepreneur. Capital is cheap, there are few distractions, and educated technical and professional labor is available. Go get ’em, tigers.

Write to jason.pontin@redherring.com

Inventor foresees implanted sensors aiding brain functions

Inventor foresees implanted sensors aiding brain functions
By Stephan Ohr, EE Times
Sep 26, 2002 (6:32 AM)

URL: http://www.eetimes.com/story/OEG20020926S0013

BOSTON — Using deliberately provocative predictions, speech-recognition pioneer Ray Kurzweil said that by 2030 nanosensors could be injected into the human bloodstream, implanted microchips could amplify or supplant some brain functions, and individuals could share memories and inner experiences by “beaming” them electronically to others.

Virtual reality can already amplify sensory experiences and spontaneously change an individual’s identity or sex, Kurzweil said in a keynote entitled “The Rapidly Shrinking Sensor: Merging Bodies and Brain,” at the Fall Sensors Expo conference and exhibition here.

Recently inducted into the National Inventors Hall of Fame for his work in speech synthesis and recognition, Kurzweil has also invented an “omni-font” optical character recognition system, a CCD flat-bed scanner and a full text-to-speech synthesizer.

Noting the accelerating rate of technological progress, Kurzweil said, “There is a much smaller time for ‘paradigm shifts’ — what took 50 years to develop in the past won’t take 50 years to develop in the future.” One-hundred years of progress might easily be reduced to 25 years or less, he said.

“Moore’s Law is just one example: All the progress of the 20th century could duplicate itself within the next 14 years,” Kurzweil said. By some measures, perhaps, the 21st century will represent 20,000 years of progress, he said. With such acceleration it becomes possible to visualize an interaction with technology that was previously reserved to science fiction writers, he said.

Current trends will make it possible to “reverse engineer” the human brain by 2020. And “$1,000 worth of computation,” which barely covered the cost of an 8088-based IBM PC in 1982, will offer 1,000 times the capability of the human brain by 2029.

Kurzweil was enthusiastic about his own experiments with virtual reality and artificial intelligence. “People say of AI, ‘Nothing ever came of that,’ yet it keeps spinning off new things,” he said. For example, British Airways has combined speech recognition and synthesis technology with virtual reality to create an interactive reservation system that allows a user to interact with a “virtual personality” to build a travel itinerary.

Via the Internet, Kurzweil demonstrated “Ramona,” a woman’s face that serves as an interactive interface to Kurzweil’s Web site.

Trading places
By projecting a virtual reality onto the Internet, it is possible to exchange personalities or don another personality. A video clip presented during the Sensors Expo keynote demonstrated how Kurzweil became Ramona on another user’s screen. As Ramona, he performed a song and dance among a toe-stepping chorus of fat men in tutus. “That heavy set man behind me was my daughter,” Kurzweil said.

“AI is about making computers do intelligent things,” Kurzweil said amid laughter and applause. “In terms of ‘common sense,’ humans are more advanced than computers . . . Yet the human brain makes only about 200 calculations per second.” The computing machinery available in 2030 will be able to make 100 trillion connections and 10^26 calculations per second, he said. And the memory footprint — 12 million bytes; Kurzweil could not resist the jest — would be smaller than Microsoft Word.

Even now, manufacturers and research groups are experimenting with wearable computers utilizing magnetic and RF sensors embedded in clothing. Just as MIT’s wearable computers enable business users to exchange business cards simply by shaking hands, Kurzweil believes it will be possible to “beam” someone your experience, tapping all five senses.

With so much intelligence embodied in sensors and microchips, Kurzweil speculated that between 2030 and 2040 non-biological intelligence would become dominant. But his conjecture rejected the common image of the science-fiction cyborg: Instead of mechanically bonding with micromachines or “nano-bots,” might it be possible to swallow them like pills, he asked, or to inject them directly into the bloodstream? Why not explore how such human-computer pairings could increase life expectancy?

Cochlear implants are already rebuilding the hearing of previously deaf patients, and implanted chips have been shown to aid the muscle control of patients with Parkinson’s disease.

Kurzweil also offered a possible downside to his images of humans merged with computing machinery, reminiscent of computer viruses: “Think of this: some year, self-replicating nanotechnology could be considered a form of cancer.”

Copyright 2002 © CMP Media, LLC

HELP, THE PRICE OF INFORMATION HAS FALLEN AND IT CAN’T GET UP

HELP, THE PRICE OF INFORMATION HAS FALLEN AND IT CAN’T GET UP [ACM, 04/97]
http://www.shirky.com/writings/information_price.html

Among people who publish what is rather deprecatingly called ‘content’ on the Internet, there has been an oft repeated refrain which runs thusly:
‘Users will eventually pay for content.’

or sometimes, more petulantly,

‘Users will eventually have to pay for content.’

It seems worth noting that the people who think this are wrong.

The price of information has not only gone into free fall in the last few years, it is still in free fall now, it will continue to fall long before it hits bottom, and when it does whole categories of currently lucrative businesses will be either transfigured unrecognizably or completely wiped out, and there is nothing anyone can do about it.

ECONOMICS 101

The basic assumption behind the fond hope for direct user fees for content is a simple theory of pricing, sometimes called ‘cost plus’, where the price of any given thing is determined by figuring out its cost to produce and distribute and then adding some profit margin. The profit margin for your groceries is in the 1-2% range, while the margin for diamonds is often greater than the original cost, i.e. greater than 100%.

Using this theory, the value of information distributed online could theoretically be derived by deducting the costs of production and distribution of the physical objects (books, newspapers, CD-ROMs) from the final cost and reapplying the profit margin. If paying writers and editors for a book manuscript incurs 50% of the costs, and printing and distributing it makes up the other 50%, then offering the book as downloadable electronic text should theoretically cut 50% (but only 50%) of the cost.

If that book enjoys the same profit margins in its electronic version as in its physical version, then the overall profits will also be cut 50%, but this should (again, theoretically) still be enough profit to act as an incentive, since one could now produce two books for the same cost.
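
To make the cost-plus arithmetic concrete, here is a minimal sketch in Python; the dollar figure and the 10% margin are invented for illustration, and only the 50/50 cost split comes from the example above.

def cost_plus_price(cost, margin):
    """Price an item by adding a profit margin to its cost."""
    return cost * (1 + margin)

print_cost = 20.00   # assumed total cost: manuscript work (50%) plus printing and distribution (50%)
margin = 0.10        # assumed profit margin

print_price = cost_plus_price(print_cost, margin)             # 22.00
electronic_cost = print_cost * 0.5                             # drop the physical 50%
electronic_price = cost_plus_price(electronic_cost, margin)    # 11.00

print(f"print edition:      ${print_price:.2f}")
print(f"electronic edition: ${electronic_price:.2f}")

Under these assumptions the per-copy profit also halves (from $2.00 to $1.00), which is exactly the "overall profits will also be cut 50%" point above.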

ECONOMICS 201

So what’s wrong with that theory? Why isn’t the price of the online version of your hometown newspaper equal to the cover price of the physical product minus the incremental costs of production and distribution? Why can’t you download the latest Tom Clancy novel for $8.97?

Remember the law of supply and demand? While there are many economic conditions which defy this old saw, its basic precepts are worth remembering. Prices rise when demand outstrips supply, even if both are falling. Prices fall when supply outstrips demand, even if both are rising. This second state describes the network perfectly, since the Web is growing even faster than the number of new users.

From the point of view of our hapless hopeful ‘content provider’, waiting for the largesse of beneficent users, the primary benefits from the network come in the form of cost savings from storage and distribution, and in access to users worldwide. From their point of view, using the network is (or ought to be) an enormous plus as a way of cutting costs.

This desire on the part of publishers of various stripes to cut costs by offering their wares over the network misconstrues what their readers are paying for. Much of what people are rewarding businesses for when they pay for ‘content’, even if they don’t recognize it, is not merely creating the content but producing and distributing it. Transporting dictionaries or magazines or weekly shoppers is hard work, and requires a significant investment. People are also paying for proximity, since the willingness of the producer to move newspapers 15 miles and books 1500 miles means that users only have to travel 15 feet to get a paper on their doorstep and 15 miles to get a book in the store.

Because of these difficulties in overcoming geography, there is some small upper limit to the number of players who can successfully make a business out of anything which requires such a distribution network. This in turn means that this small group (magazine publishers, bookstores, retail software outlets, etc.) can command relatively high profit margins.

ECONOMICS 100101101

The network changes all of that, in ways ill-understood by many traditional publishers. Now that the cost of being a global publisher has dropped to an up-front investment of $1000 and a monthly fee of $19.95 (and those charges are half of what they were a year ago and twice what they will be a year from now), being able to offer your product more cheaply around the world offers no competitive edge, given that everyone else in the world, even people and organizations who were not formerly your competitors, can now effortlessly reach people in your geographic locale as well.

To take newspapers as a test case, there is a delicate equilibrium between profitability and geography in the newspaper business. Most newspapers determine what regions they cover by finding (whether theoretically or experimentally) the geographic perimeter where the cost of trucking the newspaper outweighs the willingness of the residents to pay for it. Over the decades, the US has settled into a patchwork of abutting borders of local and regional newspapers.

The Internet destroys any cost associated with geographic distribution, which means that even though each individual paper can now reach a much wider theoretical audience, the competition also increases for all papers by orders of magnitude. This much increased competition means that anyone who can figure out how to deliver a product to the consumer for free (usually by paying the writers and producers from advertising revenues instead of direct user fees, as network television does) will have a huge advantage over its competitors.

IT’S HARD TO COMPETE WITH FREE.

To see how this would work, consider these three thought experiments showing how the cost to users of formerly expensive products can fall to zero, permanently.

Greeting Cards
Greeting card companies have a nominal product, a piece of folded paper with some combination of words and pictures on it. In reality, however, the greeting card business is mostly a service industry, where the service being sold is convenience. If greeting card companies kept all the cards in a central warehouse, and people needing to send a card had to order it days in advance, sales would plummet. The real selling point of greeting cards is immediate availability – they’re on every street corner and in every mall.

Considered in this light, it is easy to see how the network destroys any issue of convenience, since all Web sites are equally convenient (or inconvenient, depending on bandwidth) to get to. This ubiquity is a product of the network, so the value of an online ‘card’ is a fraction of its offline value. Likewise, since the cost of linking words and images has left the world of paper and ink for the altogether cheaper arena of HTML, all the greeting card sites on the Web offer their product for free, whether as a community service, as with the original MIT greeting card site, or as a free service to their users to encourage loyalty and get attention, as many magazine publishers now do.

Once a product has entered the world of the freebies used to sell boxes of cereal, it will never become a direct source of user fees again.

Classified Ads
Newspapers make an enormous proportion of their revenues on classified ads, for everything from baby clothes to used cars to rare coins. This is partly because the lack of serious competition in their geographic area allows them to charge relatively high prices. However, this arrangement is something of a kludge, since the things being sold have a much more intricate relationship to geography than newspapers do.

You might drive three miles to buy used baby clothes, thirty for a used car and sixty for rare coins. Thus, in the economically ideal classified ad scheme, all sellers would use one single classified database nationwide, and buyers would simply limit their searches by area. This would maximize the choice available to buyers and the prices sellers could command. It would also destroy a huge source of newspaper revenue.
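
As a rough sketch of that economically ideal scheme, the snippet below keeps one shared pool of listings and lets each buyer filter by category and by how far they are willing to travel; the listings, categories and distances are invented for illustration.

listings = [
    {"item": "used baby clothes", "category": "kids",  "miles_away": 2},
    {"item": "1994 sedan",        "category": "cars",  "miles_away": 25},
    {"item": "rare coins",        "category": "coins", "miles_away": 55},
    {"item": "rare coins",        "category": "coins", "miles_away": 400},
]

def search(category, max_miles):
    """Return every listing in the shared nationwide pool within the buyer's travel radius."""
    return [entry for entry in listings
            if entry["category"] == category and entry["miles_away"] <= max_miles]

print(search("coins", 60))   # the 55-mile seller matches; the 400-mile one does not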

This is happening now. The search engines like Yahoo and Lycos, the agora of the Web, are now offering classified ads as a service to get people to use their sites more. Unlike offline classified ads, however, the service is free to both buyer and seller, since the sites are both competing with one another for differentiators in their battle to survive, and they are extracting advertising revenue (on the order of one-half of one cent) every time a page on their site is viewed.

When a product can be profitable on gross revenues of one-half of one cent per use, anyone deriving income from traditional classifieds is doomed in the long run.

Real-time stock quotes
Real time stock quotes, like the ‘ticker’ you often see running at the bottom of financial TV shows, used to cost a few hundred dollars a month, when sold directly. However, much of that money went to maintaining the infrastructure necessary to get the data from point A, the stock exchange, to point B, you. When that data is sent over the Internet, the costs of that same trip fall to very near zero for both producer and consumer.

As with classified ads, once this cost is reduced, it is comparatively easy for online financial services to offer this formerly expensive service as a freebie, in the hopes that it will help them either acquire or retain customers. In less than two years, the price to the consumer has fallen from thousands of dollars annually to all but free, never to rise again.

There is an added twist with stock quotes, however. In the market, information is only valuable as a delta between what you know and what other people know – a piece of financial information which everyone knows is worthless, since the market has already accounted for it in the current prices. Thus, in addition to making real time financial data cost less to deliver, the Internet also makes it _worth_ less to have.

TIME AIN’T MONEY IF ALL YOU’VE GOT IS TIME

This last transformation is something of a conundrum – one of the principal effects of the much-touted ‘Information Economy’ is actually to devalue information more swiftly and more fully. Information is only power if it is hard to find and easy to hold, but in an arena where it is as fluid as water, value now has to come from elsewhere.

The Internet wipes out both the difficulty and the expense of geographic barriers to distribution, and it does so for individuals and multi-national corporations alike. “Content as product” is giving way to “content as service”, where users won’t pay for the object but will pay for its manipulation (editorial imprimatur, instant delivery, custom editing, filtering by relevance, and so on). In my next column, I will talk about what the rising fluidity and falling cost of pure information means for the networked economy, and how value can be derived from content when traditional pricing models have collapsed.

Weblogs and the Mass Amateurization of Publishing

First published on October 3, on the ‘Networks, Economics, and Culture’ mailing list
Subscribe to the Networks, Economics, and Culture mailing list.
http://shirky.com/writings/weblogs_publishing.html

A lot of people in the weblog world are asking “How can we make money doing this?” The answer is that most of us can’t. Weblogs are not a new kind of publishing that requires a new system of financial reward. Instead, weblogs mark a radical break. They are such an efficient tool for distributing the written word that they make publishing a financially worthless activity. It’s intuitively appealing to believe that by making the connection between writer and reader more direct, weblogs will improve the environment for direct payments as well, but the opposite is true. By removing the barriers to publishing, weblogs ensure that the few people who earn anything from their weblogs will make their money indirectly.

The search for direct fees is driven by the belief that, since weblogs make publishing easy, they should lower the barriers to becoming a professional writer. This assumption has it backwards, because mass professionalization is an oxymoron; a professional class implies a minority of members. The principal effect of weblogs is instead mass amateurization.

Mass amateurization is the web’s normal pattern. Travelocity doesn’t make everyone a travel agent. It undermines the value of being a travel agent at all, by fixing the inefficiencies travel agents are paid to overcome one booking at a time. Weblogs fix the inefficiencies traditional publishers are paid to overcome one book at a time, and in a world where publishing is that efficient, it is no longer an activity worth paying for.

Traditional publishing creates value in two ways. The first is intrinsic: it takes real work to publish anything in print, and more work to store, ship, and sell it. Because the up-front costs are large, and because each additional copy generates some additional cost, the number of potential publishers is limited to organizations prepared to support these costs. (These are barriers to entry.) And since it’s most efficient to distribute those costs over the widest possible audience, big publishers will outperform little ones. (These are economies of scale.) The cost of print ensures that there will be a small number of publishers, and of those, the big ones will have a disproportionately large market share.
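
To see the economies-of-scale point in numbers, here is a minimal sketch; the fixed and per-copy costs are invented for illustration.

def per_copy_cost(fixed_cost, marginal_cost, copies):
    """Average cost per copy when up-front costs are spread across a print run."""
    return fixed_cost / copies + marginal_cost

for copies in (1_000, 10_000, 100_000):
    print(copies, round(per_copy_cost(50_000, 2.00, copies), 2))
# 1000 52.0, 10000 7.0, 100000 2.5 -- the bigger the run, the cheaper the copy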

Weblogs destroy this intrinsic value, because they are a platform for the unlimited reproduction and distribution of the written word, for a low and fixed cost. No barriers to entry, no economies of scale, no limits on supply.

Print publishing also creates extrinsic value, as an indicator of quality. A book’s physical presence says “Someone thought this was worth risking money on.” Because large-scale print publishing costs so much, anyone who wants to be a published author has to convince a professionally skeptical system to take that risk. You can see how much we rely on this signal of value by reflecting on our attitudes towards vanity press publications.

Weblogs destroy this extrinsic value as well. Print publishing acts as a filter, weblogs do not. Whatever you want to offer the world — a draft of your novel, your thoughts on the war, your shopping list — you get to do it, and any filtering happens after the fact, through mechanisms like blogdex and Google. Publishing your writing in a weblog creates none of the imprimatur of having it published in print.

This destruction of value is what makes weblogs so important. We want a world where global publishing is effortless. We want a world where you don’t have to ask for help or permission to write out loud. However, when we get that world we face the paradox of oxygen and gold. Oxygen is more vital to human life than gold, but because air is abundant, oxygen is free. Weblogs make writing as abundant as air, with the same effect on price. Prior to the web, people paid for most of the words they read. Now, for a large and growing number of us, most of the words we read cost us nothing.

Webloggers waiting for micropayments and other forms of direct user fees have failed to understand the enormity of these changes. Weblogs aren’t a form of micropublishing that now needs micropayments. By removing both the costs and the barriers, weblogs have drained publishing of its financial value, making a coin of the realm unnecessary.

One obvious response is to restore print economics by creating artificial scarcity: readers can’t read if they don’t pay. However, the history of generating user fees through artificial scarcity is grim. Without barriers to entry, you will almost certainly have high-quality competition that costs nothing.

This leaves only indirect methods for revenue. Advertising and sponsorships are still around, of course. There is a glut of ad space, which depresses rates, but that cheapness suggests that over time advertising dollars will migrate to the Web as a low-cost alternative to traditional media. In a similar vein, there is direct marketing. The Amazon affiliate program is already providing income for several weblogs like Gizmodo and andrewsullivan.com.

Asking for donations is another method of generating income, via the Amazon and Paypal tip jars. This is the Web version of user-supported radio, where a few users become personal sponsors, donating enough money to encourage a weblogger to keep publishing for everyone. One possible improvement on the donations front would be weblog co-ops that gathered donations on behalf of a group of webloggers, and we can expect to see weblog tote bags and donor-only URLs during pledge drives, as the weblog world embraces the strategies of publicly supported media.

And then there’s print. Right now, the people who have profited most from weblogs are the people who’ve written books about weblogging. As long as ink on paper enjoys advantages over the screen, and as long as the economics make it possible to get readers to pay, the webloggers will be a de facto farm team for the publishers of books and magazines.

But the vast majority of weblogs are amateur and will stay amateur, because a medium where someone can publish globally for no cost is ideal for those who do it for the love of the thing. Rather than spawning a million micro-publishing empires, weblogs are becoming a vast and diffuse cocktail party, where most address not “the masses” but a small circle of readers, usually friends and colleagues. This is mass amateurization, and it points to a world where participating in the conversation is its own reward.

Bloggers get paid? What the *uck is going on? But I personally back The BlogMD Initiative to win out

What is Blogging Network?
Welcome! Blogging Network is a person-to-person blogging network. It’s a place to find blogs you love, write your own blog, and get to know a few people while you’re at it.

For only $2.99 per month, you get unlimited access to all the blogs on Blogging Network. Best of all, your payment is divided between the bloggers you actually read.
http://www.bloggingnetwork.com/Blogs/
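
As a hypothetical sketch of how such a split might work, the snippet below divides one subscriber's monthly fee among the blogs they read, in proportion to pages read; Blogging Network's actual formula isn't described above, so the proportional rule and the read counts are assumptions.

MONTHLY_FEE = 2.99

def split_fee(reads_by_blog, fee=MONTHLY_FEE):
    """Divide one subscriber's monthly fee among blogs, weighted by pages read."""
    total = sum(reads_by_blog.values())
    return {blog: round(fee * reads / total, 2)
            for blog, reads in reads_by_blog.items()}

print(split_fee({"alice": 30, "bob": 15, "carol": 5}))
# {'alice': 1.79, 'bob': 0.9, 'carol': 0.3}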


The BlogMD Initiative

August 26, 2002
Announcing the BlogMD Initiative

“The number of Weblogs now tops a half-million, by most estimates. So it’s no surprise that some bloggers, as the writers of these link-filled, diarylike sites are known, are carving some order out of chaos.

There is no easy way to search for blogs by content or popularity. The major blog directory, at portal.eatonweb.com, has only 6,000 listings. But a bevy of new sites offer interesting ways, if somewhat esoteric ones, to browse the blog universe. ..”

– The New York Times, August 22

And now, there’s one more site — or at least, project — devoted to helping match readers with writers, and to advancing the work of making the Blogosphere an easier neighborhood to get around in.

Welcome to the BlogMD Initiative.

It is estimated that there are half a million blogs online at this stage, and web analysts say a new blog is added every 40 seconds. Every blog is as unique and individualistic as the person who designed and writes it. With the explosion of blogging it is difficult to sift through them all and find the potentially outstanding (unique or like-minded) blogs. Because of this, blogs are clustering into like-minded groups, which is a normal social construct.

At present, numerous applications are available in the weblog world which provide interesting and useful methods of tracking weblogs and help users perform that vital sifting function. Some tools track when a weblog was last updated (weblogs.com); some track the most popular Internet links currently being pointed at by weblogs (Blogdex); and more recently, the Blogosphere Ecosystem at The Truth Laid Bear has tracked the links passing between weblogs (as does the similar, but more powerful, Myelin Ecosystem).

All of these applications are, at their core, doing the same thing. One way or another, they are gathering information about weblogs — metadata — storing it, analyzing it, and presenting their results on a web page.

The guiding principle behind the BlogMD initiative is that by creating standards in the weblog metadata “problem space”, we can enable greater collaboration and interaction between existing applications, as well as paving the way for future, currently unforeseen metadata applications by reducing or eliminating much of the redundant, “reinventing the wheel” work currently involved in creating a new weblog metadata application.
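
To make the idea of shared weblog metadata concrete, here is a hypothetical sketch of the kind of record such applications all gather; the field names and the ranking helper are invented for illustration and are not the BlogMD specification.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class WeblogMetadata:
    url: str
    title: str
    last_updated: datetime                                # what weblogs.com-style trackers record
    outbound_links: list = field(default_factory=list)    # what Blogdex-style trackers analyze
    inbound_links: list = field(default_factory=list)     # what ecosystem-style trackers count

def most_linked(records, n=10):
    """Rank weblogs by inbound links, the core of an ecosystem-style ranking."""
    return sorted(records, key=lambda r: len(r.inbound_links), reverse=True)[:n]

blog = WeblogMetadata("http://example.weblog/", "Example Weblog", datetime(2002, 8, 26))
print(most_linked([blog]))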

Effective immediately, the initiative is opening a web home here at TTLB. Here you’ll find background documentation on the project, and more importantly, a discussion board. We are inviting any and all interested weblog authors, readers, and application developers to come join in the discussion of the issues facing the project and to participate in the initiative as it moves forward.

The BlogMD founding board, responsible for driving the initiative forward, currently consists of:

N.Z. Bear of the Blogosphere Ecosystem and The Truth Laid Bear
Phillip Pearson of the Myelin Ecosystem
Dean Peters of blogs4God.com and healyourchurchwebsite.com

In addition, invitations have been sent to several additional individuals who have made significant contributions to the weblog world to join the board; we are awaiting their replies.

So what now? Take a look around. We suggest you read the FAQ or the Key Benefits document first. And then if you’re getting excited, read the Concept Doc for the complete, detailed view of the entire vision.

And then, we sincerely hope you’ll join us on the Forum, and join in the fun!

– The BlogMD Board

Posted by N.Z. Bear at August 26, 2002 12:00 AM

The BlogMD Initiative
http://www.truthlaidbear.com/blogmd/

Paid blogging hosting
http://blogspot.blogger.com/compare.pyra

MX says this is one Blog system to watch:
(If you currently use Movable Type or Greymatter you can import your existing data into pMachine)
pmachine.com/

Blogging and Journaling News
http://www.writerswrite.net/jlingaround.cfm

Think AI won’t happen – it will in some shape or form, sooner than you thought

Radio emerges from the electronic soup

19:00 28 August 02
Duncan Graham-Rowe

A self-organising electronic circuit has stunned engineers by turning itself into a radio receiver.

What should have been an oscillator became a radio

This accidental reinvention of the radio followed an experiment to see if an automated design process that uses an evolutionary computer program could be used to “breed” an electronic circuit called an oscillator. An oscillator produces a repetitive electronic signal, usually in the form of a sine wave.

Paul Layzell and Jon Bird at the University of Sussex in Brighton applied the program to a simple arrangement of transistors and found that an oscillating output did indeed evolve.

But when they looked more closely they found that, despite producing an oscillating signal, the circuit itself was not actually an oscillator. Instead, it was behaving more like a radio receiver, picking up a signal from a nearby computer and delivering it as an output.

In essence, the evolving circuit had cheated, relaying oscillations generated elsewhere, rather than generating its own.

Gene mixing

Layzell and Bird were using the software to control the connections between 10 transistors plugged into a circuit board that was fitted with programmable switches. The switches made it possible to connect the transistors differently.

Treating each switch as analogous to a gene allowed new circuits to evolve. Those that oscillated best were allowed to survive to a next generation. These “fittest” candidates were then mated by mixing their genes together, or mutated by making random changes to them.

After several thousand generations you end up with a clear winner, says Layzell. But precisely why the winner was a radio still mystifies them.
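
For readers unfamiliar with this kind of evolutionary search, here is a minimal sketch of the loop described above, with each candidate circuit encoded as a string of switch settings ("genes"); the population size, mutation rate and generation count are invented, and the fitness function is a software stand-in, since in the real experiment fitness was measured on the physical circuit's output.

import random

GENES = 30          # number of programmable switches
POP_SIZE = 50
GENERATIONS = 2000

def random_circuit():
    return [random.randint(0, 1) for _ in range(GENES)]

def fitness(circuit):
    # Placeholder score; the real experiment measured how well the physical
    # circuit's output oscillated.
    return sum(circuit)

def crossover(a, b):
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(circuit, rate=0.02):
    return [1 - gene if random.random() < rate else gene for gene in circuit]

population = [random_circuit() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]     # the "fittest" candidates survive
    children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

print("best fitness:", fitness(max(population, key=fitness)))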

To pick up a radio signal you need other elements such as an antenna. After exhaustive testing they found that a long track in the circuit board had functioned as the antenna. But how the circuit “figured out” that this would work is not known.

“There’s probably one sudden key mutation that enabled radio frequencies to be picked up,” says Bird.

19:00 28 August 02
(c) New Scientist
http://www.newscientist.com/news/print.jsp?id=ns99992732

Free Culture – Lawrence Lessig Keynote from OSCON 2002

http://www.oreillynet.com/pub/a/policy/2002/08/15/lessig.html

Free Culture
Lawrence Lessig Keynote from OSCON 2002
by Lawrence Lessig

08/15/2002

Editor’s Note: In his address before a packed house at the Open Source Convention, Lawrence Lessig challenges the audience to get more involved in the political process. Lawrence, a tireless advocate for open source, is a professor of law at Stanford Law School and the founder of the school’s Center for Internet and Society. He is also the author of the best-selling book Code, and Other Laws of Cyberspace. Here is the complete transcript of Lawrence’s keynote presentation made on July 24, 2002.

(You can also download an MP3 version of this presentation (20.2MB).)

Lawrence Lessig: I have been doing this for about two years–more than 100 of these gigs. This is about the last one. One more and it’s over for me. So I figured I wanted to write a song to end it. But then I realized I don’t sing and I can’t write music. But I came up with the refrain, at least, right? This captures the point. If you understand this refrain, you’re gonna’ understand everything I want to say to you today. It has four parts:

* Creativity and innovation always builds on the past.

* The past always tries to control the creativity that builds upon it.

* Free societies enable the future by limiting this power of the past.

* Ours is less and less a free society.

In 1774, free culture was born. In a case called Donaldson v. Beckett in the House of Lords in England, free culture was made because copyright was stopped. In 1710, the statute had said that copyright should be for a limited term of just 14 years. But in the 1740s, when Scottish publishers started reprinting classics (you gotta’ love the Scots), the London publishers said “Stop!” They said, “Copyright is forever!” Sonny Bono said “Copyright should be forever minus a day,” but the London publishers said “Copyright is forever.”

These publishers, people whom Milton referred to as old patentees and monopolizers in the trade of book selling, men who do not labor in an honest profession (except Tim here), to [them] learning is indebted. These publishers demanded a common-law copyright that would be forever. In 1769, in a case called Miller v. Taylor, they won their claim, but just five years later, in Donaldson, Miller was reversed, and for the first time in history, the works of Shakespeare were freed, freed from the control of a monopoly of publishers. Freed culture was the result of that case.

Remember the refrain. I would sing it, but you wouldn’t want me to. OK. Well, by the end we’ll see.


That free culture was carried to America; that was our birth–1790. We established a regime that left creativity unregulated. Now it was unregulated because copyright law only covered “printing.” Copyright law did not control derivative work. And copyright law granted this protection for the limited time of 14 years.

That was our birth, and more fundamentally, in 1790, because of the technology of the time, all things protected were free code. You could take the works of Shakespeare and read the source–the source was the book. You could take the work of any creativity protected by the law and understand what made it tick [by] studying it. This was the design and the regime, and even in the context of patents, there were transparent technologies. You didn’t take, you didn’t need to take the cotton gin [for example] and read the patent to understand how it worked, right? You could just take it apart.

These were legal protections in a context where understanding and learning were still free. Control in this culture was tiny. That was cute, right? Control, tiny . . . OK. And not just then, right? Forget the 18th century, the 19th century, even at the birth of the 20th century. Here’s my favorite example, here: 1928, my hero, Walt Disney, created this extraordinary work, the birth of Mickey Mouse in the form of Steamboat Willie. But what you probably don’t recognize about Steamboat Willie and his emergence into Mickey Mouse is that in 1928, Walt Disney, to use the language of the Disney Corporation today, “stole” Willie from Buster Keaton’s “Steamboat Bill.”

It was a parody, a take-off; it was built upon Steamboat Bill. Steamboat Bill was produced in 1928, no [waiting] 14 years–just take it, rip, mix, and burn, as he did [laughter] to produce the Disney empire. This was his character. Walt always parroted feature-length mainstream films to produce the Disney empire, and we see the product of this. This is the Disney Corporation: taking works in the public domain, and not even in the public domain, and turning them into vastly greater, new creativity. They took the works of this guy, these guys, the Brothers Grimm, who you think are probably great authors on their own. They produce these horrible stories, these fairy tales, which anybody should keep their children far from because they’re utterly bloody and moralistic stories, and are not the sort of thing that children should see, but they were retold for us by the Disney Corporation. Now the Disney Corporation could do this because that culture lived in a commons, an intellectual commons, a cultural commons, where people could freely take and build. It was a lawyer-free zone.

(Audience Applauds.)


It was culture, which you didn’t need the permission of someone else to take and build upon. That was the character of creativity at the birth of the last century. It was built upon a constitutional requirement that protection be for limited times, and it was originally limited. Fourteen years, if the author lived, then 28, then in 1831 it went to 42, then in 1909 it went to 56, and then magically, starting in 1962, look–no hands, the term expands.

Eleven times in the last 40 years it has been extended for existing works–not just for new works that are going to be created, but existing works. The most recent is the Sonny Bono copyright term extension act. Those of us who love it know it as the Mickey Mouse protection act, which of course [means] every time Mickey is about to pass through the public domain, copyright terms are extended. The meaning of this pattern is absolutely clear to those who pay to produce it. The meaning is: No one can do to the Disney Corporation what Walt Disney did to the Brothers Grimm. That though we had a culture where people could take and build upon what went before, that’s over. There is no such thing as the public domain in the minds of those who have produced these 11 extensions these last 40 years because now culture is owned.

Remember the refrain: We always build on the past; the past always tries to stop us. Freedom is about stopping the past, but we have lost that ideal.

Things are different now, [different] from even when Walt produced the Walt Disney Corporation. In this year now, we have a massive system to regulate creativity. A massive system of lawyers regulating creativity as copyright law has expanded in unrecognizable forms, going from a regulation of publishing to a regulation of copying. You know the things that computers do when you boot them up? Going from copies to, not just copies of the original work, but even derivative works on top of it. Going from 14 years for new works produced by a real author–there are fewer and fewer of those people out there–to life plus 70 years. That’s the expansion of law, but also there’s been an expansion of control through technology.

OK, so first of all, this reality of opaque creativity, you know that as proprietary code. Creativity where you don’t get to see how the thing works, and the law protects the thing you can’t see. It’s not Shakespeare that you can study and understand because the code is, by nature, open. Nature has been reformed in our modern, technological era, so nature can be hidden and the law still protects it–and not just through the protection, but through increasing control of uses of creative work.

Here’s my Adobe eBook Reader, right. Some of you have seen this before, I’m sure. Here’s Middlemarch; this is a work in the public domain. Here are the “permissions” (a lawyer had something to do with this) that you can do with this work in the public domain: You are allowed to copy ten selections into the clipboard every ten days–like, who got these numbers, I don’t know–but you can print ten pages of this 4 million page book every ten days, and you are allowed to feel free to use the read-aloud button to listen to this book, right?

Now, Aristotle’s Politics, another book in the public domain [that was] never really protected by copyright, but with this book, you can’t copy any text into the selection, you can’t print any pages, but feel free to listen to this book aloud. And to my great embarrassment, here’s my latest book, right? No copying, no printing, and don’t you dare use the technology to read my book aloud. [Laughter] I’ll have a sing button in the next version of Adobe. Read a book; read a book.
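
To make concrete how these per-title "permissions" amount to a machine-enforced policy, here is a hypothetical sketch; the field names and structure are invented for illustration, not Adobe's actual format.

PERMISSIONS = {
    "Middlemarch (public domain)": {
        "copy_selections_per_10_days": 10,
        "print_pages_per_10_days": 10,
        "read_aloud": True,
    },
    "Aristotle's Politics (public domain)": {
        "copy_selections_per_10_days": 0,
        "print_pages_per_10_days": 0,
        "read_aloud": True,
    },
    "Lessig's latest book": {
        "copy_selections_per_10_days": 0,
        "print_pages_per_10_days": 0,
        "read_aloud": False,
    },
}

def may(title, action):
    """Return whether the reader software permits an action for a given title."""
    return bool(PERMISSIONS[title][action])

print(may("Aristotle's Politics (public domain)", "read_aloud"))   # True
print(may("Lessig's latest book", "read_aloud"))                   # False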

The point is that control is built into the technology. Book sellers in 1760 had no conception of the power that you coders would give them some day in the future, and that control adds to this expansion of law. Law and technology produce, together, a kind of regulation of creativity we’ve not seen before. Right? Because here, here’s a simple copyright lesson: Law regulates copies. What’s that mean? Well, before the Internet, think of this as a world of all possible uses of a copyrighted work. Most of them are unregulated. Talking about fair use, this is not fair use; this is unregulated use. To read is not a fair use; it’s an unregulated use. To give it to someone is not a fair use; it’s unregulated. To sell it, to sleep on top of it, to do any of these things with this text is unregulated. Now, in the center of this unregulated use, there is a small bit of stuff regulated by the copyright law; for example, publishing the book–that’s regulated. And then within this small range of things regulated by copyright law, there’s this tiny band before the Internet of stuff we call fair use: Uses that otherwise would be regulated but that the law says you can engage in without the permission of anybody else. For example, quoting a text in another text–that’s a copy, but it’s still a fair use. That means the world was divided into three camps, not two: Unregulated uses, regulated uses that were fair use, and the quintessential copyright world. Three categories.

Enter the Internet. Every act is a copy, which means all of these unregulated uses disappear. Presumptively, everything you do on your machine on the network is a regulated use. And now it forces us into this tiny little category of arguing about, “What about the fair uses? What about the fair uses?” I will say the word: To hell with the fair uses. What about the unregulated uses we had of culture before this massive expansion of control? Now, unregulated uses disappear, we argue about fair use, and they find a way to remove fair use, right? Here’s a familiar creature to many of you, right? The wonderful Sony Aibo Pet, which you can teach to do all sorts of things. Somebody set up a wonderful aibopet.com site to teach people how to hack their dogs. Now remember, their dogs, right? And this site actually wanted to help you hack your dog to teach your dog to dance jazz. Remember (Europeans are sometimes confused about this), it’s not a crime to dance jazz in the United States.

This is a completely permissible activity–even for a dog to dance jazz. In Georgia, there are a couple jurisdictions I’m not sure about [laughter], but mainly, dancing jazz is an OK activity. So Aibopet.com said, “Here, here’s how to hack your dog to make it dance jazz.” If anything, it would be a fair use of this piece of plastic that costs over $1,500. You would think, “This is a fair use,” right?

Letter to the site: Your site contains information providing the means to circumvent Aibo-ware’s copy protection protocol, constituting a violation of the anticircumvention provisions of the DMCA. Even though the use is fair use, the use is not permitted under the law. Fair use, erased by this combination of technological control and laws that say “don’t touch it,” leaving only one thing in this field that had three: copyright control, [thereby] controlling creativity.

Now, here’s the thing you’ve got to remember. You’ve got to see this. This is the point. (And Jack Valenti misses this.) Here’s the point: Never has it been more controlled, ever. Take the changes to copyright’s term, take the changes to copyright’s scope, put them against the background of an extraordinarily concentrated structure of media, and you produce the fact that never in our history have fewer people controlled more of the evolution of our culture. Never.

Not even before the birth of free culture, not in 1773 when copyrights were perpetual, because again, they only controlled printing. How many people had printers? You could do what you wanted with these works. Ordinary uses were completely unregulated. But today, your life is perpetually regulated in the world that you live in. It is controlled by the law. Here is the refrain: Creativity depends on stopping that control. They will always try to impose it; we are free to the extent that we resist it, but we are increasingly not free.

You or the GNU, you can pick, build a world of transparent creativity–that’s your job, this weird exception in the 21st century of an industry devoted to transparent creativity, the free sharing of knowledge. It was not a choice in 1790; it was nature in 1790. You are rebuilding nature. This is what you do. You build a common base that other people can build upon. You make money, not, well, not enough, but some of you make money off of this. This is your enterprise. Create like it’s 1790. That’s your way of being. And you remind the rest of the world of what it was like when creativity and innovation were a process where people added to common knowledge. In this battle between a proprietary structure and a free structure, you show the value of the free, and as announcements such as the RealNetworks announcement demonstrate, the free still captures the imagination of the most creative in this industry. But just for now. Just for now. Just for now, because free code threatens and the threats turn against free code.

Let’s talk about software patents. There’s a guy, Mr. Gates, who’s brilliant, right? He’s brilliant. A brilliant business man; he has some insights, he is even a brilliant policy maker. Here’s what he wrote about software patents: “If people had understood how patents would be granted when most of today’s ideas were invented and had taken out patents, the industry would be at a complete standstill today.” Here’s the first thing I’m sure you’ve read of Bill Gates that you all 100 percent agree with. Gates is right. He is absolutely right. Then we shift into the genius business man: “The solution is patenting as much as we can. A future startup with no patents of its own will be forced to pay whatever price the giants choose to impose. That price might be high. Established companies have an interest in excluding future competitors.” Excluding future competitors.

Now, it’s been four years since this battle came onto your radar screens in a way that people were upset about. Four years. And there have been tiny changes in the space. There have been a bunch of “Tim” changes, right? Tim went out there and he set up something to attack bad patents. That was fine. There were a bunch of Q. Todd Dickinson changes. He was a former head of the patent commission–never saw a patent he didn’t like. But he made some minor changes in how this process should work. But the field has been dominated by apologists for the status quo. Apologists who say, We’ve always patented everything, therefore we should continue to patent this. People like Greg Aharonian, who goes around and says every single patent out there is idiotic. But it turns out that the patent system’s wonderful and we should never reform it at all. Right?

This is the world we live in now, which produces this continued growth of software patents. And here’s the question: What have we done about it? What have you done about it? Excluding future competitors–that’s the slogan, right? And that company that gave birth to the slogan that I just cited has only ever used patents in a defensive way. But as Dan Gillmor has quoted, “They’ve also said, look, the Open Source Movement out there has got to realize that there are a lot of patents at stake, and don’t imagine we won’t use them when we must.”

Now, the thing about patents is, they’re not nuclear weapons. It’s not physics that makes them powerful, it’s lawyers and lawmakers and Congress. And the thing is, you can fight all you want against the physics that make a nuclear weapon destroy all of mankind, but you can not succeed at all. Yet you could do something about this. You could fuel a revolution that fights these legal threats to you. But what have you done about it? What have you done about it?

(Audience Applauds.)

Second, the copyright wars: In a certain sense, these are the Homeric tragedies. I mean this in a very modern sense. Here’s a story: There was a documentary filmmaker who was making a documentary film about education in America. And he’s shooting across this classroom with lots of people, kids, who are completely distracted at the television in the back of the classroom. When they get back to the editing room, they realize that on the television, you can barely make out the show for two seconds; it’s “The Simpsons,” Homer Simpson on the screen. So they call up Matt Groening, who was a friend of the documentary filmmaker, and say, you know, Is this going to be a problem? It’s only a couple seconds. Matt says, No, no, no, it’s not going to be a problem, call so and so. So they called so and so, and so and so said call so and so.

Eventually, the so and so turns out to be the lawyers, so when they got to the lawyers, they said, Is this going to be a problem? It’s a documentary film. It’s about education. It’s a couple seconds. The so and so said 25,000 bucks. 25,000 bucks?! It’s a couple seconds! What do you mean 25,000 bucks? The so and so said, I don’t give a goddamn what it is for. $25,000 bucks or change your movie. Now you look at this and you say this is insane. It’s insane. And if it is only Hollywood that has to deal with this, OK, that’s fine. Let them be insane. The problem is their insane rules are now being applied to the whole world. This insanity of control is expanding as everything you do touches copyrights.

So, the broadcast flag, which says, “Before a technology is allowed to touch DTV, it must be architected to control DTV through watching for the broadcast flag.” Rebuild the network to make sure this bit of content is perfectly protected, or amend it for . . . chips that will be imposed on machines through the law, which Intel referred to as the police state in every computer, quite accurately. And they would build these computers, but are opposed to this police state system.

And then, most recently, this outrageous proposal that Congress ratify the rights of the copyright owners to launch attacks on P2P machines–malicious code that goes out there and tries to bring down P2P machines. Digital vigilantism. And not only are you allowed to sue if they do it and they shouldn’t have done it, but you have to go to the attorney general and get permission from the attorney general before you are allowed to sue about code that goes out there and destroys your machine . . . when it shouldn’t be allowed to destroy your machine. This is what they talk about in Washington. This is what they are doing. This is, as Jack Valenti says, a terrorist war they are fighting against you and your children, the terrorists. Now you step back and you say, For what? Why? What’s the problem? And they say, It’s to stop the harm which you are doing.

So, what is that harm? What is the harm that is being done by these terrible P2P networks out there? Take their own numbers. They said last year [that] five times the number of CDs sold were traded on the Net for free. Five times. Then take their numbers about the harm caused by five times the number sold being traded for free: A drop in sales of five percent. Five percent. Now, there was a recession last year, and they raised their prices and they changed the way they counted. All of those might actually account for the five percent, but even if they didn’t, the total harm caused by five times being traded for free was five percent. Now, I’m all for war in the right context, but is this the ground one stands on to call for a “terrorist war” against technology? This harm? Even if five percent gives them the right to destroy this industry, I mean, does anybody think about the decline in this industry, which is many times as large as theirs, caused by this terrorist war being launched against anybody who touches new content? Ask a venture capitalist how much money he is willing to invest in new technology that would touch content in a way that Hilary Rosen or Jack Valenti don’t sign off on. The answer is a simple one: Zero. Zero.

They’ve shut down an industry and innovation in the name of this terrorist war, and this is the cause. This is the harm. Five percent.

And what have you done about it? It’s insane. It’s extreme. It’s controlled by political interests. It has no justification in the traditional values that justify legal regulation. And we’ve done nothing about it. We’re bigger than they are. We’ve got rights on our side. And we’ve done nothing about it. We let them control this debate. Here’s the refrain that leads to this: They win because we’ve done nothing to stop it.

There’s a congressman: J.C. Watts. J.C. Watts is the only black member of the Republican Party in leadership. He’s going to resign from Congress. He’s been there seven and a half years. He’s had enough. Nobody can believe it. Nobody in Washington can believe it. Boy, not spend 700 years in Washington? He says, you know, I like you guys, but seven years is enough, eight years is too much. I’m out of here. Just about the time J.C. Watts came to Washington, this war on free code and free culture began. Just about that time.

In an interview two days ago, Watts said, Here’s the problem with Washington: “If you are explaining, you are losing.” If you are explaining, you’re losing. It’s a bumper sticker culture. People have to get it like that, and if they don’t, if it takes three seconds to make them understand, you’re off their radar screen. Three seconds to understand, or you lose. This is our problem. Six years after this battle began, we’re still explaining. We’re still explaining and we are losing. They frame this as a massive battle to stop theft, to protect property. They don’t get why rearchitecting the network destroys innovation and creativity. They extend copyrights perpetually. They don’t get how that in itself is a form of theft. A theft of our common culture. We have failed in getting them to see what the issues here are and that’s why we live in this place where a tradition speaks of freedom and their controls take it away.

Now, I’ve spent two years talking to you. To us. About this. And we’ve not done anything yet. A lot of energy building sites and blogs and Slashdot stories. [But] nothing yet to change that vision in Washington. Because we hate Washington, right? Who would waste his time in Washington?

But if you don’t do something now, this freedom that you built, that you spend your life coding, this freedom will be taken away. Either by those who see you as a threat, who then invoke the system of law we call patents, or by those who take advantage of the extraordinary expansion of control that the law of copyright now gives them over innovation. Either of these two changes through law will produce a world where your freedom has been taken away. And, If You Can’t Fight For Your Freedom . . . You Don’t Deserve It.

But you’ve done nothing.

(Audience Applauds.)

There’s a handful, we can name them, of people you could be supporting, you could be taking. Let’s put this in perspective: How many people have given to EFF? OK. How many people have given to EFF more money than they have given to their local telecom to give them shitty DSL service? See? Four. How many people have given more money to EFF than they give each year to support the monopoly–to support the other side? How many people have given anything to these people, Boucher, Cannon. . . . This is not a left and right issue. This is the important thing to recognize: This is not about conservatives versus liberals.

In our case, in Eldred [Eldred v. Ashcroft], we have this brief filed by 17 economists, including Milton Friedman, James Buchanan, Ronald Coase, Ken Arrow, you know, lunatics, right? Left-wing liberals, right? Friedman said he’d only join if the word “no-brainer” existed in the brief somewhere, like this was a complete no-brainer for him. This is not about left and right. This is about right and wrong. That’s what this battle is. These people are from the left and right. Hank Perritt, I think a grandfather of the law of cyberspace, now running for Congress in Illinois, is struggling to get support, to take this message to Washington. These are the sources, the places to go.

Then there is this organization. Now some of you say, I’m on the board of this organization. I fight many battles on that board. Some of you say we are too extreme; you say that in the wrong way, right? You send emails that say, “You are too extreme. You ought to be more mainstream.” You know and I am with you. I think EFF is great. It’s been the symbol. It’s fought the battles. But you know, it’s fought the battles in ways that sometimes need to be reformed. Help us. Don’t help us by whining. Help us by writing on the check you send in, “Please be more mainstream.” The check, right? This is the mentality you need to begin to adopt to change this battle. Because if you don’t do something now, then in another two years, somebody else will say, OK, two years is enough; I got to go back to my life. They’ll say again to you, Nothing’s changed. Except, your freedom, which has increasingly been taken away by those who recognize that the future is against them and they have the power in D.C. to protect themselves against that future. Free society be damned.

Thank you very much.

Lawrence Lessig is a Professor of Law at Stanford Law School and founder of the school’s Center for Internet and Society. Prior to joining the Stanford faculty, he was the Berkman Professor of Law at Harvard Law School. His book, Code, and Other Laws of Cyberspace, is published by Basic Books.

——————————————————————————–


oreillynet.com Copyright © 2000 O’Reilly & Associates, Inc.