K programming language

A Shallow Introduction to the K Programming Language (Columns)
By jjayson
Thu Nov 14th, 2002 at 05:58:07 AM EST

About two years ago I was introduced to a programming language that I really didn’t like: it didn’t have continuations, I didn’t see any objects, it had too many operators, it didn’t have a large community around it, it was strange and different, and it looked like line noise, like Perl, and I don’t like Perl. However, I gave it a try.

I had to learn that continuations may not be there, but first-class functions are; it may not have a normal object system, but that is because the language doesn’t need one and gets its power by cutting across objects; all the operators are the functions that make up its standard library; its community may not be large, but it is incredibly intelligent; it only looks strange until you understand its concepts; and, well, it will always look like line noise, but you will stop caring, because that is also what makes the concise code easier to read. K has since become my language of choice.


http://www.kx.com/

Big Money for Cyber Security (US tax Dollars)

Big Money for Cyber Security (Technology)
By imrdkl
Wed Nov 13th, 2002 at 02:58:20 PM EST

This week, House Bill 3394, the Cyber Security Research and Development Act, passed in the Senate, and is now headed for the White House, where the President is expected to sign it without delay. The bill allocates almost a billion dollars for scholarships, grants, and research on the topic of Cyber Security.

While much of the existing knowledge and many of the working implementations in this area have been developed over the years as part of existing Free Software implementations, the government has found that there simply is not enough funding, or talent, behind those efforts. They’re quite concerned about vulnerabilities in the critical infrastructure of the US, including telecommunications, transportation, water supply, and banking, as well as the electric power, natural gas, and petroleum production industries, all of which rely significantly upon computers and computer networks for their operation.

The bill itself may be studied at the Library of Congress, using their search engine, or directly. This article will present an overview of the exciting and profitable opportunities which will soon be available to researchers with an interest in Cyber Security.

——————————————————————————–

Some of the other important findings of the bill include:

The US is not prepared for coordinated cyber attacks which may result from war.

Federal investment in computer and network security research must be increased to decrease vulnerability, expand and improve the “pool” of knowledge, and better coordinate sharing and collaboration.

African-Americans, Hispanics, and Native Americans comprise less than 7 percent of the information science workforce, and this number should be increased.

I consider the second finding particularly interesting. Given the history of security research, when the bill finds that better sharing and collaboration is necessary, one might conclude that the government intends to support the continued and expanded efforts of Open Source software, to accomplish the task. While there are certainly closed implementations for security, it’s just “commonsensical” to put the money behind the open and freely-available efforts which are already shared, and collaborated upon.

In general, the National Science Foundation (NSF), which will oversee the distribution of the funds, will be directed to award monies for research and study on the following topics during the next five years:

authentication, cryptography, and other secure data communications technology
computer forensics and intrusion detection
reliability of computer and network applications, middleware, operating systems, control systems, and communications infrastructure
privacy and confidentiality
network security architecture, including tools for security administration and analysis
emerging threats
vulnerability assessments and techniques for quantifying risk
remote access and wireless security
enhancement of law enforcement ability to detect, investigate, and prosecute cyber-crimes, including those that involve piracy of intellectual property.
Now, that’s certainly a broad list. It introduces significant possibilities for improving and enhancing existing implementations, as well as finding new and improved techniques. The applications which will be considered are to be evaluated on a “merit” basis, and may be undertaken by universities and other non-profit institutions, as well as partnerships between one or more of these institutions along with for-profit entities and/or government institutions.

Criteria for acceptance of any proposal submitted will be based upon:

the ability of the applicant to generate innovative approaches
the experience of the applicant in conducting research
the capacity of the applicant to attract and provide adequate support
the extent to which the applicant will partner with government laboratories, for-profit entities, other institutions of higher education, or nonprofit research institutions, and the role the partners will play in the research undertaken by the Center.
It seems a fair question to ask, why is the amount of “partnership” important? If the end result of the research is to be “shared and collaborated”, then perhaps the amount of partnership is not so critical as the first three criteria. In any case, there’s soon to be a lot of new money for study and work related to computer security. The application process itself, while not yet established, has provisions for each of the distinct topics mentioned previously, both for graduate study and training, as well as undergraduate internships and programs.

Have you an interest in Cyber Security? What programs or software could be improved, and how would such a large capital infusion for research affect these projects? What are the political ramifications of the government getting involved with the projects, either directly or indirectly? And what about the shortage of minorities in the profession? What can be done to encourage young people in general, and African-Americans, Hispanics, and Native-Americans in particular to study and learn about Cyber Security?

Other Coverage: UPI, InfoWorld and GovExec

Nickel Exchange: P2P Micropayments

Nickel Exchange: P2P Micropayments (MLP)
By higinx
Tue Nov 12th, 2002 at 12:14:34 PM EST

Many companies have tried to implement micropayment solutions before, but none have really succeeded. The Nickel Exchange introduces a completely new approach to micropayments that tries to address the flaws we’ve seen in previous systems. And best of all, it’s a free service.

http://www.ginx.com/nx/

Just checked the site July 14th 2003
They’ve paused the site ATM due to no one reaching 100 units 🙁

Next best that I can see is :

http://www.centipaid.com/

July 16th 2003
http://www.bitpass.com/

July 22
http://www.amazon.com/webservices/

“We are almost ready to kick off the beta for our payment system. The payment system will allow visitors to your site to use their Amazon account to pay you for any product or service. You can also offer subscriptions and controlled access to content. You will be able to verify the status of any transaction to make sure that the user has not rescinded it. We will provide you with a base-level API and you can construct your business logic on top.”

http://www.dashes.com/anil/index.php?archives/006765.php

Blogs referral marketing – bastards

Spam meets blogs (MLP)
By kpaul
Mon Oct 28th, 2002 at 07:33:29 AM EST

Michelle Delio at Wired has an interesting article (When the Spam Hits the Blogs). In it, she explores another somewhat new phenomenon in the blogosphere. According to the article, spammers have begun hitting sites furiously to get links on a lot of sites’ backlink lists.
….

When the Spam Hits the Blogs – Michelle Delio – Wired original article.

How to present ReferrerLinking on your web site

Sendo ditches closed source Micky$oft

http://www.sendo.co.uk/news/newsitem.asp?ID=61

SENDO CHOOSES NOKIA’S SERIES 60 PLATFORM FOR ITS SMART PHONES
Thu Nov 7 2002
Sendo, a British mobile phone manufacturer, today announced that the company has decided to license the Series 60 Platform from Nokia for its smart phone category. The Series 60 is a software platform for feature- and application-rich smart phones that Nokia licenses to mobile handset manufacturers. The platform is optimised to run on top of the Symbian OS. Sendo joins Matsushita, Samsung, Siemens and Nokia as the newest member of the Series 60 licensing community.

“Earlier this fall we reviewed our smart phone strategy. While our mission of providing customers with feature-rich and ubiquitous devices remains unaltered, seeing that the Series 60 fully embraces both our mission and the new strategy we decided to approach Nokia,” said Hugh Brogan, Chief Executive Officer of Sendo Holdings Plc. “The platform utilises open standards and technologies, such as MMS and Java, jointly developed by the industry. The platform is robust, yet uniquely flexible, bringing great benefits to licensees, operators, developers and consumers.”

“We welcome Sendo, a pioneer in smart phone development, to join our Series 60 community. We see that a combination of Sendo’s technical expertise and growing market presence will bring significant contribution to the mobile market with Series 60 devices. Interoperable solutions that are built on open and common industry standards are proving to be the winning formula in meeting demands of business users and consumers alike,” said Niklas Savander, Vice President and General Manager, Nokia Mobile Software.

Nokia licenses the Series 60 Platform as source code. The model enables licensees to contribute to the development of the platform while fully executing their individual business strategy, brand and customer requirements in the fast-developing and highly competitive mobile communications market. Licensees will be able to include the Series 60 in their own smart phone designs, thus speeding up the rollout of new phone models at lower costs.

The Series 60 is a comprehensive software platform for smart phones, created for mobile phone users that demand easy-to-use, one-hand operated handsets with high-quality colour screens, rich communications and enhanced applications. The Series 60 platform consists of the key telephony and personal information management applications, the browser and messaging clients, as well as a complete and modifiable user interface, all designed to run on top of the Symbian OS, an operating system for advanced, data enabled mobile phones.

——————————————————————————–

For further information, please contact

Marijke van Hooren
Sendo
Phone:+44 (0) 121 251 5060
Mobile:+44 (0) 7968 820 701
mvanhooren@sendo.com

Nokia Corporate Communications (Americas)
Phone:+1 972 894 4875

Nokia Mobile Software Communications
Phone:+358 7 180 08000
nokia.mobile.phones@nokia.com
www.nokia.com

——————————————————————————–

About Sendo
Sendo, headquartered in the United Kingdom, started shipping its first terminals to operator customers in Europe and Asia in May 2001. The company is now shipping five products in over twenty countries in Europe and Asia, with the USA soon to follow. Sendo offers high-performance, competitively priced, reliable products and services to the cellular market. Sendo has been established with the needs of the wireless carriers and consumers in mind. The company offers a complete custom program, from exclusively branded phones, matched fulfillment programs and software with dedicated services. Details of the company are available at www.sendo.com

About Nokia
Nokia is the world leader in mobile communications. Backed by its experience, innovation, user-friendliness and reliable solutions, the company has become the leading supplier of mobile phones and a leading supplier of mobile, fixed broadband and IP networks. By adding mobility to the Internet Nokia creates new opportunities for companies and further enriches the daily lives of people. Nokia is a broadly held company with listings on six major exchanges.

Real Hacking Rules! (What Is the Essence of Hacking?)

Real Hacking Rules!
Or, Before the Word is Totally Useless, What Is the Essence of Hacking?
by Richard Thieme
10/04/2002
http://www.oreillynet.com/pub/a/network/2002/10/04/hackers.html

On the tenth anniversary of Def Con, the annual Las Vegas meeting of computer hackers, security professionals, and others, I reflected on how the con–and hacking–had changed since I spoke at Def Con 4 seven years earlier.

The word hacker today means everything from digging into a system–any system–at root level to defacing a Web site with graffiti. Because we have to define what we mean whenever we use the term, the word is lost to common usage. Still, post 9/11 and the Patriot Act, it behooves hackers of any definition to be keenly aware of the ends to which they hack. Hackers must know their roots and know how to return to “root” when necessary.

At Def Con 4 I said that hacking was practice for transplanetary life in the 21st century. I was right. The skills I foresaw as essential just a short generation ahead have indeed been developed by the best of the hacker community, who helped to create–and secure–the Net that is now ubiquitous. But the game of building and cracking security, managing multiple identities, and obsessing over solving puzzles is played now on a ten-dimensional chess board. Morphing boundaries at every level of organizational structure have created a new game.

In essence, hacking is a way of thinking about complex systems. It includes the skills required to cobble together seemingly disparate pieces of a puzzle in order to understand the system; whether modules of code or pieces of a bigger societal puzzle, hackers intuitively grasp and look for the bigger picture that makes sense of the parts. So defined, hacking is a high calling. Hacking includes defining and defending identity, creating safe boundaries, and searching for the larger truth in a maze of confusion and intentional disinformation.

In the national security state that has evolved since World War II, hacking is one means by which a free people can retain freedom. Hacking includes the means and methodologies by which we construct more comprehensive truths or images of the systems we hack.

Hackers cross disciplinary lines. In addition to computer hackers, forensic accountants (whistleblowers, really), investigative journalists (“conspiracy theorists”), even shamans are hackers because hacking means hacking both the system and the mind that made it. That’s why, when you finally understand Linux, you understand … everything.

The more complex the system, the more challenging the puzzles, the more exhilarating the quest. Edward O. Wilson said in Consilience that great scientists are characterized by a passion for knowledge, obsessiveness, and daring.

Real hackers too.

The Cold War mentality drew the geopolitical map of the world as opposing alliances; now the map is more complex, defining fluid alliances in terms of non-state actors, narcotics/weapons-traffickers, and incendiary terrorist cells. Still, the game is the same: America sees itself as a huge bulls-eye always on the defensive.

In this interpretation, the mind of society is both target and weapon and the management of perception–from deception and psychological operations to propaganda, spin, and public relations–is its cornerstone.

That means that the modules of truth that must be connected to form the bigger picture are often exchanged in a black market. The machinery of that black market is hacking.

Here’s an example:

A colleague was called by a source after a major blackout in the Pacific Northwest. The source claimed that the official explanation for the blackout was bogus. Instead, he suggested, a non-state aggressor such as a narco-terrorist had probably provided a demonstration of power, attacking the electric grid as a show of force.

“The proof will come,” he said, “if it happens again in a few days.”

A few days later, another blackout hit the area.

Fast-forward to a security conference at which an Army officer and I began chatting. One of his stories made him really chuckle.

“We were in the desert,” he said, “testing an electromagnetic weapon. It was high-level stuff. We needed a phone call from the Secretary of Defense to hit the switch. When we did, we turned out the lights all over the Pacific Northwest.” He added, “Just to be sure, we did it again a few days later and it happened again.”

That story is a metaphor for life in a national security state.

That test took place in a secured area that was, in effect, an entire canyon. Cover stories were prepared for people who might wander in, cover stories for every level of clearance, so each narrative would fuse seamlessly with how different people “constructed reality.”

The journalistic source was correct in knowing that the official story didn’t account for the details. He knew it was false but didn’t know what was true. In the absence of truth, we make it up. Only when we have the real data, including the way the data has been rewritten to obscure the truth, can we know what is happening.

That’s hacking on a societal level. Hacking is knowing how to discern or retrieve information beyond that which is designed for official consumption. It is abstract thinking at the highest level, practical knowledge of what’s likely, or might, or must be true, if this little piece is true, informed by an intuition so tutored over time it looks like magic.

Post 9/11, the distinction between youthful adventuring and reconstituting the bigger picture on behalf of the greater good is critical. What was trivial mischief that once got a slap on the wrist is now an act of terrorism, setting up a teenager for a long prison term. The advent of global terrorism and the beginning of the Third World War have changed the name of the game.

Yet without checks and balances, we will go too far in the other direction. The FBI in Boston is currently notorious for imprisoning innocent men to protect criminal allies. I would guess that the agents who initiated that strategy had good intentions. But good intentions go awry. Without transparency, there is no truth. Without truth, there is no accountability. Without accountability, there is no justice.

Hacking ensures transparency. Hacking is about being free in a world in which we understand that we will never be totally free.

Nevertheless, hackers must roll the boulder up the hill. They have no choice but to be who they are. But they must understand the context in which they work and the seriousness of the consequences when they don’t.

Hackers must, as the Good Book says, be wise as serpents and innocent as doves.

Richard Thieme is a business consultant, writer, and professional speaker focused on “life on the edge,” in particular the human dimension of technology and the work place, change management and organizational effectiveness.

the next big thang… – gentoo linux?

http://www.gentoo.org/

O’Reilly article
The article gives a brief overview of some of the features of Gentoo Linux, and Daniel expounds on the enhancements users can expect in the 1.4 final release: support for true 64-bit on the UltraSparc architecture, KDE 3.0.4, a gentoo-sources kernel with Andrea Arcangeli’s 3.5GB “user address space” patch and grsec, and of course the new Gentoo Reference Platform for fast binary installs.

Googling Your Email by Jon Udell

Googling Your Email
by Jon Udell
10/07/2002
http://www.oreillynet.com/pub/a/network/2002/10/07/udell.html

Someday we’ll tell our grandchildren about those moments of epiphany, back in the last century, when we first glimpsed how the Web would change our relationship to the world. For me, one of those moments came when I was looking for an ODBC driver kit that I knew was on a CD somewhere in my office. After rifling through my piles of clutter to no avail, I tried rifling through AltaVista’s index. Bingo! Downloading those couple of megabytes over our 56K leased line to the Internet was, to be sure, way slower than my CD-ROM drive’s transfer rate would have been, but since I couldn’t lay my hands on the CD, it was a moot point. Through AltaVista I could find, and then possess, things that I already possessed but could not find.

There began an odd inversion that continues to the present day. Any data that’s public, and that Google can see, is hardly worth storing and organizing. We simply search for what we need, when we need it: just-in-time information management. But since we don’t admit Google to our private data stores — Intranets [1] and mailboxes, for example — we’re still like the shoemaker’s barefoot children. Most of us can find all sorts of obscure things more easily than we can find the file that Tom sent Leslie last week.

What would it be like to Google your email? Raphaël Szwarc’s ZOË is a clever piece of software that explores this idea. It’s written in Java (source available), so it can be debugged and run everywhere. ZOË is implemented as a collection of services. Startup is as simple as unpacking the zipped tarball and launching ZOË.jar. The services that fire up include a local Web server that handles the browser-based UI, a text indexing engine, a POP client and server, and an SMTP server.
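To make that service stack concrete, here is a minimal Python sketch of the fetch-and-index half of the idea (my own illustration, not ZOË's code, which is Java): pull messages down over POP3, build a small in-memory inverted index, and answer keyword queries against it. The hostname and credentials are placeholders, and the browser-based UI and SMTP proxy are left out.

# A minimal sketch (not ZOE's code): fetch mail over POP3, tokenize it
# into an in-memory inverted index, and answer keyword queries.
# The hostname, user, and password below are placeholders.
import email
import poplib
import re
from collections import defaultdict

index = defaultdict(set)   # term -> set of message numbers
messages = {}              # message number -> (subject, sender)

def index_message(msg_id, raw_bytes):
    msg = email.message_from_bytes(raw_bytes)
    subject = msg.get("Subject", "")
    sender = msg.get("From", "")
    body = ""
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            payload = part.get_payload(decode=True) or b""
            body += payload.decode(errors="replace")
    messages[msg_id] = (subject, sender)
    for term in re.findall(r"[a-z0-9]+", (subject + " " + body).lower()):
        index[term].add(msg_id)

def fetch_and_index(host, user, password):
    # Leave the messages on the server so the regular mail client
    # still sees them; this code only reads and indexes.
    box = poplib.POP3_SSL(host)
    box.user(user)
    box.pass_(password)
    count, _ = box.stat()
    for i in range(1, count + 1):
        _, lines, _ = box.retr(i)
        index_message(i, b"\r\n".join(lines))
    box.quit()

def search(query):
    terms = query.lower().split()
    if not terms:
        return []
    ids = set.intersection(*(index.get(t, set()) for t in terms))
    return [messages[i] for i in sorted(ids)]

if __name__ == "__main__":
    fetch_and_index("pop.example.com", "someone", "secret")  # placeholders
    for subject, sender in search("infoworld"):
        print(sender, "-", subject)

Leaving messages on the server, as the comment notes, mirrors the arrangement described below, where the indexer runs in parallel with the regular mail client.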

Because ZOË has a Web-style architecture, you can use it remotely as well as locally. At the moment, for example, I’m running ZOË on a Mac OS X box in my office, but browsing into it from my wirelessly connected laptop outside. I wouldn’t recommend this, however, since ZOË’s Web server has no access controls in place. By contrast, Radio Userland — also a local, Web-server-based application, which I’m currently running on a Windows XP box in my office and browsing into remotely — does offer HTTP basic authentication, though not over SSL. In the WiFi era, you have to be aware of which local services are truly local.

ZOË doesn’t aim to replace your email client, but rather to proxy your mail traffic and build useful search and navigation mechanisms. At the moment, I’m using ZOË together with Outlook (on Windows XP) and Entourage (on MacOSX). ZOË’s POP client sucks down and indexes my incoming mail in parallel with my regular clients. (I leave a cache of messages on the server so the clients don’t step on one another.) By routing my outbound mail through ZOË’s SMTP server, it gets to capture and index that as well. Here’s a typical search result.

[see original web site screen shot]

ZOË helps by contextualizing the results, then extracting and listing Contributors (the message senders), Attachments, and Links (such as the URL strings found in the messages). These context items are all hyperlinks. Clicking “Doug Dineley” produces the set of messages from Doug, like so:

Following Weblog convention, the # sign preceding Doug’s name is a permalink. It assigns a URL to the query “find all of Doug’s messages,” so you can bookmark it or save it on the desktop.

Note also the breadcrumb trail that ZOË has built:

ZOË -> Com -> InfoWorld

These are links too, and they lead to directories that ZOË has automatically built. Here’s the view after clicking the InfoWorld link:

[see original web site screen shot]

Nice! Along with the directory of names, ZOË has organized all of the URLs that appear in my InfoWorld-related messages. This would be even more interesting if those URLs were named descriptively, but of course, that’s a hard thing to do. Alternatively, ZOË could spider those URLs and produce a view offering contextual summaries of them. We don’t normally think of desktop applications doing things like that, but ZOË (like Google) is really a service, working all the time, toiling in ways that computers should and people shouldn’t.

When we talk about distributed Web services, we ought not lose sight of the ones that run on our own machines, and have access to our private data. ZOË reminds us how powerful these personal services can be. It also invites us to imagine even richer uses for them.

Fast, fulltext search, for example, is only part of the value that ZOË adds. Equally useful is the context it supplies. That, of course, relies on the standard metadata items available in email: Subject, Date, From. Like all mail archivers, ZOË tries to group messages into threads, and like all of them, it is limited by the unfortunate failure of mail clients to use References or In-Reply-To headers in a consistent way. Threading, therefore, depends on matching the text of Subject headers and sacrifices a lot of useful context.
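To make the limitation concrete, here is a small Python sketch (my own illustration, not ZOË's code) of subject-only threading: strip the reply prefixes, normalize case and whitespace, and bucket messages by what remains. Whatever context lives only in References or In-Reply-To headers is simply lost.

# Illustrative sketch: group messages into threads by normalized
# Subject line, the fallback an archiver is forced into when mail
# clients omit References/In-Reply-To headers.
import re
from collections import defaultdict

def normalize_subject(subject):
    # Strip any number of leading "Re:" / "Fw:" / "Fwd:" prefixes and
    # collapse whitespace, so replies land in the same bucket.
    s = subject.strip()
    while True:
        stripped = re.sub(r"^(re|fwd?)\s*(\[\d+\])?\s*:\s*", "", s,
                          flags=re.IGNORECASE)
        if stripped == s:
            break
        s = stripped
    return re.sub(r"\s+", " ", s).lower()

def thread_by_subject(messages):
    # messages: list of (subject, sender) pairs -> {topic: [messages]}
    threads = defaultdict(list)
    for subject, sender in messages:
        threads[normalize_subject(subject)].append((subject, sender))
    return threads

if __name__ == "__main__":
    mail = [
        ("Web services conference", "doug@example.com"),
        ("Re: Web services conference", "jon@example.com"),
        ("RE: RE:  Web services conference", "doug@example.com"),
        ("Test Center schedule", "leslie@example.com"),
    ]
    for topic, msgs in thread_by_subject(mail).items():
        print(topic, "->", len(msgs), "message(s)")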

For years, I’ve hoped email clients would begin to support custom metadata tags that would enable more robust contextualization — even better than accurate threading would provide. My working life is organized around projects, and every project has associated with it a set of email messages. In Outlook, I use filtering and folders to organize messages by project. Unfortunately, there’s no way to reuse that effort. The structure I impose on my mail store cannot be shared with other software, or with other people. Neither can the filtering rules that help me maintain that structure. This is crazy! We need to start to think of desktop applications not only as consumers of services, but also as producers of them. If Outlook’s filters were Web services, for example, then ZOË — running on the same or another machine — could make use of them.

Services could flow in the other direction, too. For example, ZOË spends a lot of time doing textual analysis of email. Most of the correlations I perform manually, using Outlook folders, could be inferred by a hypothetical version of ZOË that would group messages based on matching content in their bodies as well as in their headers, then generate titles for these groups by summarizing them. There should be no need for Outlook to duplicate these structures. ZOË could simply offer them as a metadata feed, just as it currently offers an RSS feed that summarizes the current day’s messages.

At InfoWorld’s recent Web services conference, Google’s cofounder Sergey Brin gave a keynote talk. Afterward, somebody asked him to weigh in on RDF and the semantic Web. “Look,” he said, “putting angle brackets around everything is not a technology, by itself. I’d rather make progress by having computers understand what humans write, than to force humans to write in ways computers can understand.” I’ve always thought that we need to find more and better ways to capture metadata when we communicate. But I’ve got to admit that the filtering and folders I use in Outlook require more effort than most people will ever be willing to invest. There may yet turn out to be ways to make writing the semantic Web easy and natural. Meanwhile, Google and, now, ZOË remind us that we can still add plenty of value to the poorly-structured stuff that we write every day. It’s a brute-force strategy, to be sure, but isn’t that why we have these 2GHz personal computers?

Jon Udell is lead analyst for the InfoWorld Test Center.
——————————————————————————–

[1] Users of the Google Search Appliance do, of course, invite Google behind the firewall.

The Case Against Micropayments

The Case Against Micropayments
by Clay Shirky
12/19/2000
http://www.openp2p.com/pub/a/p2p/2000/12/19/micropayments.html

Micropayments are back, at least in theory, thanks to P2P. Micropayments are an idea with a long history and a disputed definition – as the W3C micropayment working group puts it, ” … there is no clear definition of a ‘Web micropayment’ that encompasses all systems,” but in its broadest definition, the word micropayment refers to “low-value electronic financial transactions.”

P2P creates two problems that micropayments seem ideally suited to solve. The first is the need to reward creators of text, graphics, music or video without the overhead of publishing middlemen or the necessity to charge high prices. The success of music-sharing systems such as Napster and Audiogalaxy, and the growth of more general platforms for file sharing such as Gnutella, Freenet and AIMster, make this problem urgent.

The other, more general P2P problem micropayments seem to solve is the need for efficient markets. Proponents believe that micropayments are ideal not just for paying artists and musicians, but for providers of any resource – spare cycles, spare disk space, and so on. Accordingly, micropayments are a necessary precondition for the efficient use of distributed resources.

Jakob Nielsen, in his essay The Case for Micropayments writes, “I predict that most sites that are not financed through traditional product sales will move to micropayments in less than two years,” and Nicholas Negroponte makes an even shorter-term prediction: “You’re going to see within the next year an extraordinary movement on the Web of systems for micropayment … .” He goes on to predict micropayment revenues in the tens or hundreds of billions of dollars.

Alas for micropayments, both of these predictions were made in 1998. (In 1999, Nielsen reiterated his position, saying, “I now finally believe that the first wave of micropayment services will hit in 2000.”) And here it is, the end of 2000. Not only did we not get the flying cars, we didn’t get micropayments either. What happened?

Micropayments: An Idea Whose Time Has Gone
Micropayment systems have not failed because of poor implementation; they have failed because they are a bad idea. Furthermore, since their weakness is systemic, they will continue to fail in the future.

Proponents of micropayments often argue that the real world demonstrates user acceptance: Micropayments are used in a number of household utilities such as electricity, gas, and, most germanely, telecom services like long distance.

These arguments run aground on the historical record. There have been a number of attempts to implement micropayments, and they have not caught on even in a modest fashion – a partial list of floundering or failed systems includes FirstVirtual, Cybercoin, Millicent, Digicash, Internet Dollar, Pay2See, MicroMint and Cybercent. If there was going to be broad user support, we would have seen some glimmer of it by now.

Furthermore, businesses like the gas company and the phone company that use micropayments offline share one characteristic: They are all monopolies or cartels. In situations where there is real competition, providers are usually forced to drop “pay as you go” schemes in response to user preference, because if they don’t, anyone who can offer flat-rate pricing becomes the market leader. (See sidebar: “Simplicity in pricing.”)

The historical record for user preferences in telecom has been particularly clear. In Andrew Odlyzko’s seminal work, The history of communications and its implications for the Internet, he puts it this way:

“There are repeating patterns in the histories of communication technologies, including ordinary mail, the telegraph, the telephone, and the Internet. In particular, the typical story for each service is that quality rises, prices decrease, and usage increases to produce increased total revenues. At the same time, prices become simpler.

“The historical analogies of this paper suggest that the Internet will evolve in a similar way, towards simplicity. The schemes that aim to provide differentiated service levels and sophisticated pricing schemes are unlikely to be widely adopted.”

Why have micropayments failed? There’s a short answer and a long one. The short answer captures micropayment’s fatal weakness; the long one just provides additional detail.

The Short Answer for Why Micropayments Fail
Users hate them.

The Long Answer for Why Micropayments Fail
Why does it matter that users hate micropayments? Because users are the ones with the money, and micropayments do not take user preferences into account.

In particular, users want predictable and simple pricing. Micropayments, meanwhile, waste the users’ mental effort in order to conserve cheap resources, by creating many tiny, unpredictable transactions. Micropayments thus create in the mind of the user both anxiety and confusion, characteristics that users have not heretofore been known to actively seek out.

Anxiety and the Double-Standard of Decision Making
Many people working on micropayments emphasize the need for simplicity in the implementation. Indeed, the W3C is working on a micropayment system embedded within a link itself, an attempt to make the decision to purchase almost literally a no-brainer.

Embedding the micropayment into the link would seem to take the intrusiveness of the micropayment to an absolute minimum, but in fact it creates a double-standard. A transaction can’t be worth so much as to require a decision but worth so little that that decision is automatic. There is a certain amount of anxiety involved in any decision to buy, no matter how small, and it derives not from the interface used or the time required, but from the very act of deciding.

Micropayments, like all payments, require a comparison: “Is this much of X worth that much of Y?” There is a minimum mental transaction cost created by this fact that cannot be optimized away, because the only transaction a user will be willing to approve with no thought will be one that costs them nothing, which is no transaction at all.

Thus the anxiety of buying is a permanent feature of micropayment systems, since economic decisions are made on the margin – not, “Is a drink worth a dollar?” but, “Is the next drink worth the next dollar?” Anything that requires the user to approve a transaction creates this anxiety, no matter what the mechanism for deciding or paying is.

The desired state for micropayments – “Get the user to authorize payment without creating any overhead” – can thus never be achieved, because the anxiety of decision making creates overhead. No matter how simple the interface is, there will always be transactions too small to be worth the hassle.

Confusion and the Double-Standard of Value
Even accepting the anxiety of deciding as a permanent feature of commerce, micropayments would still seem to have an advantage over larger payments, since the cost of the transaction is so low. Who could haggle over a penny’s worth of content? After all, people routinely leave extra pennies in a jar by the cashier. Surely amounts this small make valuing a micropayment transaction effortless?

Here again micropayments create a double-standard. One cannot tell users that they need to place a monetary value on something while also suggesting that the fee charged is functionally zero. This creates confusion – if the message to the user is that paying a penny for something makes it effectively free, then why isn’t it actually free? Alternatively, if the user is being forced to assent to a debit, how can they behave as if they are not spending money?

Beneath a certain price, goods or services become harder to value, not easier, because the X for Y comparison becomes more confusing, not less. Users have no trouble deciding whether a $1 newspaper is worthwhile – did it interest you, did it keep you from getting bored, did reading it let you sound up to date – but how could you decide whether each part of the newspaper is worth a penny?

Was each of 100 individual stories in the newspaper worth a penny, even though you didn’t read all of them? Was each of the 25 stories you read worth 4 cents apiece? If you read a story halfway through, was it worth half what a full story was worth? And so on.

When you disaggregate a newspaper, it becomes harder to value, not easier. By accepting that different people will find different things interesting, and by rolling all of those things together, a newspaper achieves what micropayments cannot: clarity in pricing.

The very micro-ness of micropayments makes them confusing. At the very least, users will be persistently puzzled over the conflicting messages of “This is worth so much you have to decide whether to buy it or not” and “This is worth so little that it has virtually no cost to you.”

User Preferences
Micropayment advocates mistakenly believe that efficient allocation of resources is the purpose of markets. Efficiency is a byproduct of market systems, not their goal. Markets work not because users have embraced efficiency but because they are the best place for users to maximize their preferences, and very often those preferences are not for conservation of cheap resources.

Imagine you are moving and need to buy cardboard boxes. Now you could go and measure the height, width, and depth of every object in your house – every book, every fork, every shoe – and then create 3D models of how these objects could be most densely packed into cardboard boxes, and only then buy the actual boxes. This would allow you to use the minimum number of boxes.

But you don’t care about cardboard boxes, you care about moving, so spending time and effort to calculate the exact number of boxes conserves boxes but wastes time. Furthermore, you know that having one box too many is not nearly as bad as having one box too few, so you will be willing to guess how many boxes you will need, and then pad the number.

For low-cost items, in other words, you are willing to overpay for cheap resources, in order to have a system that maximizes other, more important, preferences. Micropayment systems, by contrast, typically treat cheap resources (content, cycles, disk) as precious commodities, while treating the user’s time as if it were so abundant as to be free.

Micropayments Are Just Payments
Neither the difficulties posed by mental transaction costs nor the historical record of user demand for simple, predictable pricing offers much hope for micropayments. In fact, as happened with earlier experiments attempting to replace cash with “smart cards,” a new form of financial infrastructure turned out to be unnecessary when the existing infrastructure proved flexible enough to be modified. Smart cards as cash replacements failed because the existing credit card infrastructure was extended to include both debit cards and ubiquitous card-reading terminals.

So it is with micropayments. The closest thing we have to functioning micropayment systems, Qpass and Paypal, are simply new interfaces to the existing credit card infrastructure. These services do not lower mental transaction costs nor do they make it any easier for a user to value a penny’s worth of anything – they simply make it possible for users to spend their money once they’ve decided to.

Micropayment systems are simply payment systems, and the size and frequency of the average purchase will be set by the user’s willingness to spend, not by special infrastructure or interfaces. There is no magic bullet – only payment systems that work within user expectations can succeed, and users will not tolerate many tiny payments.

Old Solutions
This still leaves the problems that micropayments were meant to solve. How to balance users’ strong preference for simple pricing with the enormous number of cheap, but not free, things available on the Net?

Micropayment advocates often act as if this is a problem particular to the Internet, but the real world abounds with items of vanishingly small value: a single stick of gum, a single newspaper article, a single day’s rent. There are three principal solutions to this problem offline – aggregation, subscription, and subsidy – that are used individually or in combination. It is these same solutions – and not micropayments – that are likely to prevail online as well.

Aggregation
Aggregation follows the newspaper example earlier – gather together a large number of low-value things, and bundle them into a single higher-value transaction.

Call this the “Disneyland” pricing model – entrance to the park costs money, and all the rides are free. Likewise, the newspaper has a single cost, that, once paid, gives the user free access to all the stories.

Aggregation also smoothes out the differences in preferences. Imagine a newspaper sold in three separate sections – news, business, and sports. Now imagine that Curly would pay a nickel to get the news section, a dime for business, and a dime for sports; Moe would pay a dime each for news and business but only a nickel for sports; and Larry would pay a dime, a nickel, and a dime.

If the newspaper charges a nickel a section, each man will buy all three sections, for 15 cents. If it prices each section at a dime, each man will opt out of one section, paying a total of 20 cents. If the newspaper aggregates all three sections together, however, Curly, Moe and Larry will all agree to pay 25 cents for the whole, even though they value the parts differently.
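The same arithmetic, spelled out in a short Python sketch using the willingness-to-pay figures above (in cents):

# The Curly/Moe/Larry example from the text: willingness to pay, in
# cents, per reader and section, and the seller's revenue under three
# pricing schemes.
readers = {
    "Curly": {"news": 5, "business": 10, "sports": 10},
    "Moe":   {"news": 10, "business": 10, "sports": 5},
    "Larry": {"news": 10, "business": 5, "sports": 10},
}

def revenue_per_section(price):
    # Each reader buys only the sections he values at or above the price.
    return sum(price
               for valuations in readers.values()
               for v in valuations.values() if v >= price)

def revenue_bundled(bundle_price):
    # Each reader buys the whole paper if his total valuation covers it.
    return sum(bundle_price
               for valuations in readers.values()
               if sum(valuations.values()) >= bundle_price)

print(revenue_per_section(5))    # 45: everyone buys all three sections
print(revenue_per_section(10))   # 60: each man drops his nickel section
print(revenue_bundled(25))       # 75: all three pay a quarter for the bundle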

Aggregation thus not only lowers the mental transaction costs associated with micropayments by bundling several purchase decisions together, it creates economic efficiencies unavailable in a world where each resource is priced separately.

Subscription
A subscription is a way of bundling diverse materials together over a set period, in return for a set fee from the user. As the newspaper example demonstrates, aggregation and subscription can work together for the same bundle of assets.

Subscription is more than just aggregation in time. Money’s value is variable – $100 today is better than $100 a month from now. Furthermore, producers value predictability no less than consumers, so producers are often willing to trade lower subscription prices in return for lump sum payments and a more predictable revenue stream.

Long-term incentives

Game theory fans will recognize subscription arrangements as an Iterated Prisoner’s Dilemma, where the producer’s incentive to ship substandard product or the consumer’s to take resources without paying is dampened by the repetition of delivery and payment.

Subscription also serves as a reputation management system. Because producer and consumer are more known to one another in a subscription arrangement than in one-off purchases, and because the consumer expects steady production from the producer, while the producer hopes for renewed subscriptions from the consumer, both sides have an incentive to live up to their part of the bargain, as a way of creating long-term value. (See sidebar: “Long-term incentives”.)

Subsidy
Subsidy is by far the most common form of pricing for the resources micropayments were meant to target. Subsidy is simply getting someone other than the audience to offset costs. Again, the newspaper example shows that subsidy can exist alongside aggregation and subscription, since the advertisers subsidize most, and in some cases all, of a newspaper’s costs. Advertising subsidy is the normal form of revenue for most Web sites offering content.

The biggest source of subsidy on the Net overall, however, is from the users themselves. The weblog movement, where users generate daily logs of their thoughts and interests, is typically user subsidized – both the time and the resources needed to generate and distribute the content are donated by the user as a labor of love.

Indeed, even as the micropayment movement imagines a world where charging for resources becomes easy enough to spawn a new class of professionals, what seems to be happening is that the resources are becoming cheap enough to allow amateurs to easily subsidize their own work.

Against users’ distaste for micropayments, aggregation, subscription, and subsidy will be the principal tools for bridging the gap between atomized resources and demand for simple, predictable pricing.

Playing by the Users’ Rules
Micropayment proponents have long suggested that micropayments will work because it would be great if they did. A functioning micropayment system would solve several thorny financial problems all at once. Unfortunately, the barriers to micropayments are not problems of technology and interface, but user approval. The advantage of micropayment systems to people receiving micropayments is clear, but the value to users whose money and time is involved isn’t.

Because of transactional inefficiencies, user resistance, and the increasing flexibility of the existing financial framework, micropayments will never become a general class of network application. Anyone setting out to build systems that reward resource providers will have to create payment systems that provide users with the kind of financial experience they demand – simple, predictable and easily valued. Only solutions that play by these rules will succeed.

——————————————————————————–

Clay Shirky is a Partner at The Accelerator Group. He writes extensively about the social and economic effects of the internet for the O’Reilly Network, Business 2.0, and FEED.

genomics, nanotechnology, The Economist and Red Herring’s view

The locus of innovation
Have information technology and communications become boring?
by Jason Pontin
September 27, 2002
http://www.redherring.com/columns/2002/friday/lastword092702.html

The Economist said it, and therefore it must be true. In the latest Technology Quarterly, published in the September 21 issue of the news magazine, the editors write, “A glance at where, and for what, patents are now being granted, suggests that innovation has begun to move away from telecoms, computing, and ecommerce towards fresher pastures–especially in genomics and nanotechnology.”

Do we really believe this? Surely the smarmy British magazine has an answerable point when it notes, “The excessive exuberance during the run-up to the millennium has saddled the IT industry worldwide with $750 billion of debt and some $250 billion of overcapacity. That is an awfully big hangover to overcome.”

Nor have I forgotten that last week I essentially agreed with Charles Fitzgerald, Microsoft’s chief propagandist, when he said that, so far as software was concerned, “I am a believer in the mundane future.”

Finally, both as a future patient of drug and genetic therapies and as someone interested in new technology, I am excited by the convergence of the life sciences, computing, and nanotechnology. Imagine a future where quantum dots in your body detect a cellular catastrophe like an epileptic stroke or heart attack, and chips in your blood stream deliver a drug perfectly designed to stop that catastrophe before it can seriously harm your organism. All without serious side effects. Sound far-fetched? Science-fictional? It’s only years away; it’s in clinical trials now.

But I have been writing about information technology for almost a decade, and I am equally certain of one other thing: IT is as cyclical as a manic depressive’s moods. While this “bottom” exceeds in scale and seriousness anything in the history of computing, information technologists always seem to insist that their industry has become a boring, commodities-based sector just before a kid in some university dorm dreams up something that fundamentally changes the way businesses work and ordinary folks conduct their lives.

Alas, at the moment we don’t know what this something will be. Cringely’s Law says that in the short term things change much less than we expect, but that we have absolutely no idea what will happen in the long term. Therefore I believe this: biotechnology and nanotechnology will be the locus of innovation and wealth-creation in the immediate future. I recognize that certain structural difficulties in IT and telecom must be addressed before any renaissance can occur–specifically, all that debt and excess capacity must be reduced, and the “last mile” must be conquered and broadband Internet access brought to every American home at an affordable price.

But I will not write off IT quite yet. While the immediate future may be mundane, I am certain that further in the future we will have another computing revolution that will excite investors, consumers, and businesses as much as personal computers and the Internet once excited them. I think I even know what that revolution will be: an “always on,” distributed, intelligent network.

It’s a great time to be an entrepreneur. Capital is cheap, there are few distractions, and educated technical and professional labor is available. Go get ’em, tigers.

Write to jason.pontin@redherring.com

Inventor foresees implanted sensors aiding brain functions

Inventor foresees implanted sensors aiding brain functions
By Stephan Ohr, EE Times
Sep 26, 2002 (6:32 AM)

URL: http://www.eetimes.com/story/OEG20020926S0013

BOSTON — Using deliberately provocative predictions, speech-recognition pioneer Ray Kurzweil said that by 2030 nanosensors could be injected into the human bloodstream, implanted microchips could amplify or supplant some brain functions, and individuals could share memories and inner experiences by “beaming” them electronically to others.

Virtual reality can already amplify sensory experiences and spontaneously change an individual’s identity or sex, Kurzweil said in a keynote entitled “The Rapidly Shrinking Sensor: Merging Bodies and Brain,” at the Fall Sensors Expo conference and exhibition here.

Recently inducted into the National Inventors Hall of Fame for his work in speech synthesis and recognition, Kurzweil has also invented an “omni-font” optical character recognition system, a CCD flat-bed scanner and a full text-to-speech synthesizer.

Noting the accelerating rate of technological progress, Kurzweil said, “There is a much smaller time for ‘paradigm shifts’ — what took 50 years to develop in the past won’t take 50 years to develop in the future.” One-hundred years of progress might easily be reduced to 25 years or less, he said.

“Moore’s Law is just one example: All the progress of the 20th century could duplicate itself within the next 14 years,” Kurzweil said. By some measures, perhaps, the 21st century will represent 20,000 years of progress, he said. With such acceleration it becomes possible to visualize an interaction with technology that was previously reserved to science fiction writers, he said.

Current trends will make it possible to “reverse engineer” the human brain by 2020. And “$1,000 worth of computation,” which barely covered the cost of an 8088-based IBM PC in 1982, will offer 1,000 times the capability of the human brain by 2029.

Kurzweil was enthusiastic about his own experiments with virtual reality and artificial intelligence. “People say of AI, ‘Nothing ever came of that,’ yet it keeps spinning off new things,” he said. For example, British Airways has combined speech recognition and synthesis technology with virtual reality to create an interactive reservation system that allows a user to interact with a “virtual personality” to build a travel itinerary.

Via the Internet, Kurzweil demonstrated “Ramona,” a woman’s face that serves as an interactive interface to Kurzweil’s Web site.

Trading places
By projecting a virtual reality onto the Internet, it is possible to exchange personalities or don another personality. A video clip presented during the Sensors Expo keynote demonstrated how Kurzweil became Ramona on another user’s screen. As Ramona, he performed a song and dance among a toe-stepping chorus of fat men in tutus. “That heavy set man behind me was my daughter,” Kurzweil said.

“AI is about making computers do intelligent things,” Kurzweil said amid laughter and applause. “In terms of ‘common sense,’ humans are more advanced than computers . . . Yet the human brain makes only about 200 calculations per second.” The computing machinery available in 2030 will be able to make 100 trillion connections and 10^26 calculations per second, he said. And the memory footprint (12 million bytes; Kurzweil could not resist the jest) would be smaller than Microsoft Word.

Even now, manufacturers and research groups are experimenting with wearable computers utilizing magnetic and RF sensors embedded in clothing. Just as MIT’s wearable computers enable business users to exchange business cards simply by shaking hands, Kurzweil believes it will be possible to “beam” someone your experience, tapping all five senses.

With so much intelligence embodied in sensors and microchips, Kurzweil speculated that between 2030 and 2040 non-biological intelligence would become dominant. But his conjecture rejected the common image of the science-fiction cyborg: Instead of mechanically bonding with micromachines or “nano-bots,” might it be possible, he asked, to swallow them like pills? Or to inject them directly into the bloodstream? Why not explore how such human-computer pairings could increase life expectancy?

Cochlear implants are already rebuilding the hearing of previously deaf patients, and implanted chips have been shown to aid the muscle control of patients with Parkinson’s disease.

Kurzweil also offered a possible downside to his images of humans merged with computing machinery, reminiscent of computer viruses: “Think of this: some year, self-replicating nanotechnology could be considered a form of cancer.”

Copyright 2002 © CMP Media, LLC

HELP, THE PRICE OF INFORMATION HAS FALLEN AND IT CAN’T GET UP

HELP, THE PRICE OF INFORMATION HAS FALLEN AND IT CAN’T GET UP [ACM, 04/97]
http://www.shirky.com/writings/information_price.html

Among people who publish what is rather deprecatingly called ‘content’ on the Internet, there has been an oft-repeated refrain which runs thusly:
‘Users will eventually pay for content.’

or sometimes, more petulantly,

‘Users will eventually have to pay for content.’

It seems worth noting that the people who think this are wrong.

The price of information has not only gone into free fall in the last few years, it is still in free fall now, it will continue to fall long before it hits bottom, and when it does whole categories of currently lucrative businesses will be either transfigured unrecognizably or completely wiped out, and there is nothing anyone can do about it.

ECONOMICS 101

The basic assumption behind the fond hope for direct user fees for content is a simple theory of pricing, sometimes called ‘cost plus’, where the price of any given thing is determined by figuring out its cost to produce and distribute, and then adding some profit margin. The profit margin for your groceries is in the 1-2% range, while the margin for diamonds is often greater than the original cost, i.e., greater than 100%.

Using this theory, the value of information distributed online could theoretically be derived by deducting the costs of production and distribution of the physical objects (books, newspapers, CD-ROMs) from the final cost and reapplying the profit margin. If paying writers and editors for a book manuscript incurs 50% of the costs, and printing and distributing it makes up the other 50%, then offering the book as downloadable electronic text should theoretically cut 50% (but only 50%) of the cost.

If that book enjoys the same profit margins in its electronic version as in its physical version, then the overall profits will also be cut 50%, but this should (again, theoretically) still be enough profit to act as an incentive, since one could now produce two books for the same cost.
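As a rough illustration of that cost-plus arithmetic, here is a short Python sketch; the 50/50 cost split comes from the text, while the dollar amounts and the 20% margin are invented for the example:

# Cost-plus pricing as sketched above: drop the physical production and
# distribution half of the cost, reapply the same margin to the rest.
# Dollar amounts and the 20% margin are purely illustrative.
def cost_plus_price(total_cost, margin):
    return total_cost * (1 + margin)

content_cost  = 10.00   # paying writers and editors (50% of total cost)
physical_cost = 10.00   # printing and distribution (the other 50%)
margin = 0.20

paper_price = cost_plus_price(content_cost + physical_cost, margin)
ebook_price = cost_plus_price(content_cost, margin)

print(paper_price)                                   # 24.0
print(ebook_price)                                   # 12.0 -- half the price
print(paper_price - (content_cost + physical_cost))  # 4.0 profit per copy
print(ebook_price - content_cost)                    # 2.0 -- profit also halved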

ECONOMICS 201

So what’s wrong with that theory? Why isn’t the price of the online version of your hometown newspaper equal to the cover price of the physical product minus the incremental costs of production and distribution? Why can’t you download the latest Tom Clancy novel for $8.97?

Remember the law of supply and demand? While there are many economic conditions which defy this old saw, its basic precepts are worth remembering. Prices rise when demand outstrips supply, even if both are falling. Prices fall when supply outstrips demand, even if both are rising. This second state describes the network perfectly, since the Web is growing even faster than the number of new users.

From the point of view of our hapless hopeful ‘content provider’, waiting for the largesse of beneficent users, the primary benefits from the network come in the form of cost savings from storage and distribution, and in access to users worldwide. From their point of view, using the network is (or ought to be) an enormous plus as a way of cutting costs.

This desire on the part of publishers of various stripes to cut costs by offering their wares over the network misconstrues what their readers are paying for. Much of what people are rewarding businesses for when they pay for ‘content’, even if they don’t recognize it, is not merely creating the content but producing and distributing it. Transporting dictionaries or magazines or weekly shoppers is hard work, and requires a significant investment. People are also paying for proximity, since the willingness of the producer to move newspapers 15 miles and books 1500 miles means that users only have to travel 15 feet to get a paper on their doorstep and 15 miles to get a book in the store.

Because of these difficulties in overcoming geography, there is some small upper limit to the number of players who can successfully make a business out of anything which requires such a distribution network. This in turn means that this small group (magazine publishers, bookstores, retail software outlets, etc.) can command relatively high profit margins.

ECONOMICS 100101101

The network changes all of that, in ways ill-understood by many traditional publishers. Now that the cost of being a global publisher has dropped to an up-front investment of $1000 and a monthly fee of $19.95 (and those charges are half of what they were a year ago and twice what they will be a year from now), being able to offer your product more cheaply around the world offers no competitive edge, given that everyone else in the world, even people and organizations who were not formerly your competitors, can now effortlessly reach people in your geographic locale as well.

To take newspapers as a test case, there is a delicate equilibrium between profitability and geography in the newspaper business. Most newspapers determine what regions they cover by finding (whether theoretically or experimentally) the geographic perimeter where the cost of trucking the newspaper outweighs the willingness of the residents to pay for it. Over the decades, the US has settled into a patchwork of abutting borders of local and regional newspapers.

The Internet destroys any cost associated with geographic distribution, which means that even though each individual paper can now reach a much wider theoretical audience, the competition also increases for all papers by orders of magnitude. This much increased competition means that anyone who can figure out how to deliver a product to the consumer for free (usually by paying the writers and producers from advertising revenues instead of direct user fees, as network television does) will have a huge advantage over its competitors.

IT’S HARD TO COMPETE WITH FREE.

To see how this would work, consider these three thought experiments showing how the cost to users of formerly expensive products can fall to zero, permanently.

Greeting Cards
Greeting card companies have a nominal product, a piece of folded paper with some combination of words and pictures on it. In reality, however, the greeting card business is mostly a service industry, where the service being sold is convenience. If greeting card companies kept all the cards in a central warehouse, and people needing to send a card had to order it days in advance, sales would plummet. The real selling point of greeting cards is immediate availability – they’re on every street corner and in every mall.

Considered in this light, it is easy to see how the network destroys any issue of convenience, since all Web sites are equally convenient (or inconvenient, depending on bandwidth) to get to. This ubiquity is a product of the network, so the value of an online ‘card’ is a fraction of its offline value. Likewise, since the costs of linking words and images have left the world of paper and ink for the altogether cheaper arena of HTML, all the greeting card sites on the Web offer their product for free, whether as a community service, as with the original MIT greeting card site, or as a free service to their users to encourage loyalty and get attention, as many magazine publishers now do.

Once a product has entered the world of the freebies used to sell boxes of cereal, it will never become a direct source of user fees again.

Classified Ads
Newspapers make an enormous proportion of their revenues on classified ads, for everything from baby clothes to used cars to rare coins. This is partly because the lack of serious competition in their geographic area allows them to charge relatively high prices. However, this arrangement is something of a kludge, since the things being sold have a much more intricate relationship to geography than newspapers do.

You might drive three miles to buy used baby clothes, thirty for a used car and sixty for rare coins. Thus, in the economically ideal classified ad scheme, all sellers would use one single classified database nationwide, and buyers would simply limit their searches by area. This would maximize the choice available to the buyers and the prices the sellers could command. It would also destroy a huge source of newspaper revenue.

This is happening now. Search engines like Yahoo and Lycos, the agoras of the Web, are now offering classified ads as a service to get people to use their sites more. Unlike offline classified ads, however, the service is free to both buyer and seller, since the sites are competing with one another for differentiators in their battle to survive, and they extract advertising revenue (on the order of one-half of one cent) every time a page on their site is viewed.

When a product can be profitable on gross revenues of one-half of one cent per use, anyone deriving income from traditional classifieds is doomed in the long run.
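To see why half a cent per page view can sustain a service that is free to its users, here is a back-of-the-envelope sketch in Python. Only the half-cent rate comes from the text above; the traffic and cost figures are hypothetical.

    # Back-of-the-envelope sketch of ad-supported classified ads.
    # Only the half-cent-per-page-view rate comes from the text above;
    # the traffic and cost figures below are hypothetical.

    revenue_per_page_view = 0.005      # one-half of one cent per page viewed
    monthly_page_views = 50_000_000    # hypothetical traffic for a large portal

    monthly_ad_revenue = revenue_per_page_view * monthly_page_views
    monthly_operating_cost = 150_000   # hypothetical serving and staffing costs

    surplus = monthly_ad_revenue - monthly_operating_cost
    print(f"revenue ${monthly_ad_revenue:,.0f}  costs ${monthly_operating_cost:,.0f}  "
          f"surplus ${surplus:,.0f}")
    # revenue $250,000  costs $150,000  surplus $100,000 -- free to buyer and seller,
    # yet still profitable at half a cent per view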

Real-time stock quotes
Real-time stock quotes, like the ‘ticker’ you often see running at the bottom of financial TV shows, used to cost a few hundred dollars a month when sold directly. However, much of that money went to maintaining the infrastructure necessary to get the data from point A, the stock exchange, to point B, you. When that data is sent over the Internet, the costs of that same trip fall to very near zero for both producer and consumer.

As with classified ads, once this cost is reduced, it is comparatively easy for online financial services to offer this formerly expensive service as a freebie, in the hopes that it will help them either acquire or retain customers. In less than two years, the price to the consumer has fallen from thousands of dollars annually to all but free, never to rise again.

There is an added twist with stock quotes, however. In the market, information is only valuable as a delta between what you know and what other people know – a piece of financial information which everyone knows is worthless, since the market has already accounted for it in the current prices. Thus, in addition to making real time financial data cost less to deliver, the Internet also makes it _worth_ less to have.

TIME AIN’T MONEY IF ALL YOU’VE GOT IS TIME

This last transformation is something of a conundrum – one of the principal effects of the much-touted ‘Information Economy’ is actually to devalue information more swiftly and more fully. Information is only power if it is hard to find and easy to hold, but in an arena where it is as fluid as water, value now has to come from elsewhere.

The Internet wipes out both the difficulty and the expense of geographic barriers to distribution, and it does so for individuals and multi-national corporations alike. “Content as product” is giving way to “content as service”, where users won’t pay for the object but will pay for its manipulation (editorial imprimatur, instant delivery, custom editing, filtering by relevance, and so on). In my next column, I will talk about what the rising fluidity and falling cost of pure information means for the networked economy, and how value can be derived from content when traditional pricing models have collapsed.

Weblogs and the Mass Amateurization of Publishing

First published on October 3, on the ‘Networks, Economics, and Culture’ mailing list
http://shirky.com/writings/weblogs_publishing.html

A lot of people in the weblog world are asking “How can we make money doing this?” The answer is that most of us can’t. Weblogs are not a new kind of publishing that requires a new system of financial reward. Instead, weblogs mark a radical break. They are such an efficient tool for distributing the written word that they make publishing a financially worthless activity. It’s intuitively appealing to believe that by making the connection between writer and reader more direct, weblogs will improve the environment for direct payments as well, but the opposite is true. By removing the barriers to publishing, weblogs ensure that the few people who earn anything from their weblogs will make their money indirectly.

The search for direct fees is driven by the belief that, since weblogs make publishing easy, they should lower the barriers to becoming a professional writer. This assumption has it backwards, because mass professionalization is an oxymoron; a professional class implies a minority of members. The principal effect of weblogs is instead mass amateurization.

Mass amateurization is the web’s normal pattern. Travelocity doesn’t make everyone a travel agent. It undermines the value of being a travel agent at all, by fixing the inefficiencies travel agents are paid to overcome one booking at a time. Weblogs fix the inefficiencies traditional publishers are paid to overcome one book at a time, and in a world where publishing is that efficient, it is no longer an activity worth paying for.

Traditional publishing creates value in two ways. The first is intrinsic: it takes real work to publish anything in print, and more work to store, ship, and sell it. Because the up-front costs are large, and because each additional copy generates some additional cost, the number of potential publishers is limited to organizations prepared to support these costs. (These are barriers to entry.) And since it’s most efficient to distribute those costs over the widest possible audience, big publishers will outperform little ones. (These are economies of scale.) The cost of print ensures that there will be a small number of publishers, and of those, the big ones will have a disproportionately large market share.

Weblogs destroy this intrinsic value, because they are a platform for the unlimited reproduction and distribution of the written word, for a low and fixed cost. No barriers to entry, no economies of scale, no limits on supply.

Print publishing also creates extrinsic value, as an indicator of quality. A book’s physical presence says “Someone thought this was worth risking money on.” Because large-scale print publishing costs so much, anyone who wants to be a published author has to convince a professionally skeptical system to take that risk. You can see how much we rely on this signal of value by reflecting on our attitudes towards vanity press publications.

Weblogs destroy this extrinsic value as well. Print publishing acts as a filter, weblogs do not. Whatever you want to offer the world — a draft of your novel, your thoughts on the war, your shopping list — you get to do it, and any filtering happens after the fact, through mechanisms like blogdex and Google. Publishing your writing in a weblog creates none of the imprimatur of having it published in print.

This destruction of value is what makes weblogs so important. We want a world where global publishing is effortless. We want a world where you don’t have to ask for help or permission to write out loud. However, when we get that world we face the paradox of oxygen and gold. Oxygen is more vital to human life than gold, but because air is abundant, oxygen is free. Weblogs make writing as abundant as air, with the same effect on price. Prior to the web, people paid for most of the words they read. Now, for a large and growing number of us, most of the words we read cost us nothing.

Webloggers waiting for micropayments and other forms of direct user fees have failed to understand the enormity of these changes. Weblogs aren’t a form of micropublishing that now needs micropayments. By removing both the costs and the barriers to entry, weblogs have drained publishing of its financial value, making a coin of the realm unnecessary.

One obvious response is to restore print economics by creating artificial scarcity: readers can’t read if they don’t pay. However, the history of generating user fees through artificial scarcity is grim. Without barriers to entry, you will almost certainly have high-quality competition that costs nothing.

This leaves only indirect methods for revenue. Advertising and sponsorships are still around, of course. There is currently a glut of ad space, which keeps rates low, but that very cheapness suggests that over time advertising dollars will migrate to the Web as a low-cost alternative to traditional media. In a similar vein, there is direct marketing. The Amazon affiliate program is already providing income for several weblogs, such as Gizmodo and andrewsullivan.com.

Asking for donations is another method of generating income, via the Amazon and Paypal tip jars. This is the Web version of user-supported radio, where a few users become personal sponsors, donating enough money to encourage a weblogger to keep publishing for everyone. One possible improvement on the donations front would be weblog co-ops that gathered donations on behalf of a group of webloggers, and we can expect to see weblog tote bags and donor-only URLs during pledge drives, as the weblog world embraces the strategies of publicly supported media.

And then there’s print. Right now, the people who have profited most from weblogs are the people who’ve written books about weblogging. As long as ink on paper enjoys advantages over the screen, and as long as the economics make it possible to get readers to pay, the webloggers will be a de facto farm team for the publishers of books and magazines.

But the vast majority of weblogs are amateur and will stay amateur, because a medium where someone can publish globally for no cost is ideal for those who do it for the love of the thing. Rather than spawning a million micro-publishing empires, weblogs are becoming a vast and diffuse cocktail party, where most address not “the masses” but a small circle of readers, usually friends and colleagues. This is mass amateurization, and it points to a world where participating in the conversation is its own reward.