Why You Might Soon Feel More Secure about Insecure Software

BY SCOTT BERINATO
http://www.idg.net/go.cgi?id=775870

Application security—until now an oxymoron of the highest order, like “jumbo shrimp”—is finally changing for the better.

IF SOFTWARE WERE an office building, it would be built by a thousand carpenters, electricians and plumbers. Without architects. Or blueprints. It would look spectacular, but inside, the elevators would fail regularly. Thieves would have unfettered access through open vents at street level. Tenants would need consultants to move in. They would discover that the doors unlock whenever someone brews a pot of coffee. The builders would provide a repair kit and promise that such idiosyncrasies would not exist in the next skyscraper they build (which, by the way, tenants will be forced to move into).

Strangely, the tenants would be OK with all this. They’d tolerate the costs and the oddly comforting rhythm of failure and repair that came to dominate their lives. If someone asked, “Why do we put up with this building?” shoulders would be shrugged, hands tossed and sighs heaved. “That’s just how it is. Basically, buildings suck.”

The absurdity of this is the point, and it’s universal, because the software industry is strangely irrational and antithetical to common sense. It is perhaps the first industry ever in which shoddiness is not anathema—it’s simply expected. In many ways, shoddiness is the goal. “Don’t worry, be crappy,” Guy Kawasaki wrote in 2000 in his book, Rules for Revolutionaries: The Capitalist Manifesto for Creating and Marketing New Products and Services. “Revolutionary means you ship and then test,” he wrote. “Lots of things made the first Mac in 1984 a piece of crap—but it was a revolutionary piece of crap.”

The only thing more shocking than the fact that Kawasaki’s iconoclasm passes as wisdom is that executives have spent billions of dollars endorsing it. They’ve invested—and reinvested—in software built to be revolutionary and not necessarily good. And when those products fail, or break, or allow bad guys in, the blame finds its way everywhere except where it belongs: the flawed products and the vendors that create them.

“We’ve developed a culture in which we don’t expect software to work well, where it’s OK for the marketplace to pay to serve as beta testers for software,” says Steve Cross, director and CEO of the Software Engineering Institute (SEI) at Carnegie Mellon University. “We just don’t apply the same demands that we do from other engineered artifacts. We pay for Windows the same as we would a toaster, and we expect the toaster to work every time. But if Windows crashes, well, that’s just how it is.”

Application security is why we’re starting here, where we usually end: because it’s finally changing.

A complex set of factors is conspiring to create a cultural shift away from the defeatist tolerance of “that’s just how it is” toward a new era of empowerment. Not only can software get better, it must get better, say executives. They wonder, “Why is software so insecure?” and then, “What are we doing about it?”

In fact, there’s good news when it comes to application security, but it’s not the good news you might expect: application security is changing for the better in a fundamental and profound way. Observers invoke the automotive industry’s quality wake-up call in the ’70s. One security expert summed up the quiet revolution with a giddy, “It’s happening. It’s finally happening.”

Even Kawasaki seems to be changing his rules. He says security is a migraine headache that has to be solved. “Don’t tell me how to make my website cooler,” he says. “Tell me how I can make it secure.”

“Don’t worry, be crappy” has evolved into “Don’t be crappy.” Software that doesn’t suck. What a revolutionary concept.

Why Is Software So Insecure?
Software applications lack viable security because, at first, they didn’t need it. “I graduated in computer science and learned nothing about security,” says Chris Wysopal, technical director at security consultancy @Stake. “Program isolation was your security.”

The code-writing trade grew up during an era when only two things mattered: features and deadlines. Get the software to do something, and do it as fast as possible.

Networking changed all that. It allowed someone to hack away at your software from somewhere else, mostly undetected. But it also meant that more people were using computers, so there was more demand for software. That led to more competition. Software vendors coded frantically to outwit competitors with more features sooner. That led to what one software developer called “featureitis.” Inflammation of the features.

Now, features make software do something, but they don’t stop it from unwittingly doing something else at the same time. E-mail attachments, for example, are a feature. But they also help spread viruses. That is an unintended consequence—and the more features, the more unintended consequences.

By 1996, the Internet, supporting 16 million hosts, was a joke in terms of security, easily compromised by dedicated attackers. Teenagers were cracking anything they wanted to: NASA, the Pentagon, the Mexican finance ministry. The odd part is, while the world changed, software development did not. It stuck to its features/deadlines culture despite the security problem.

Even today, the software development methodologies most commonly used still cater to deadlines and features, not security. Software development has been able to maintain its old-school, insecure approach because the technology industry adopted a less-than-ideal fix for the problem: security applications, a multibillion-dollar industry’s worth of new code layered on top of programs that remain foundationally insecure. But there’s an important subtlety. Security software doesn’t improve application security. It simply guards insecure code, and once it’s bypassed, the flaws behind it can expose the entire enterprise.

In other words, the industry has put locks on the doors but not on the loading dock out back. Instead of securing networking protocols, firewalls are thrown up. Instead of building e-mail programs that defeat viruses, antivirus software is slapped on.
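
To make the distinction concrete, here is a minimal sketch in C (hypothetical code, not drawn from the article or from any vendor’s product). The unchecked copy below is the kind of flaw that lives inside the application itself: a firewall in front of it only guards the hole, while the second version closes it at the source.

    #include <stdio.h>
    #include <string.h>

    /* Insecure: copies input of any length into a 64-byte buffer.
       If the input arrives off the network, an attacker can overflow
       the buffer no matter what perimeter defenses sit in front. */
    void handle_request_insecure(const char *input) {
        char buf[64];
        strcpy(buf, input);               /* no bounds check */
        printf("handling: %s\n", buf);
    }

    /* Secure: the same feature with the flaw fixed in the code
       itself; the input is truncated to fit the buffer. */
    void handle_request_secure(const char *input) {
        char buf[64];
        strncpy(buf, input, sizeof(buf) - 1);
        buf[sizeof(buf) - 1] = '\0';      /* strncpy may not terminate */
        printf("handling: %s\n", buf);
    }

    int main(void) {
        handle_request_secure("GET /index.html HTTP/1.0");
        return 0;
    }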

When the first major wave of Internet attacks hit in early 2000, security software was the savior, brought in at any expense to mitigate the problem. But attacks kept coming, and more recently, security software has lost much of its original appeal. That—combined with a bad economy, a new focus on national security, pending regulation that focuses on securing information and sheer fatigue from the constant barrage of attacks—spurred many software buyers to think differently about how to fix the security problem.

In addition, a bevy of new research has been published demonstrating an ROI for vendors and users in building more secure code. Plus, a new class of software tools has been developed to automatically ferret out the most gratuitous software flaws.
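
To give a sense of what such tools look for, here is a toy flaw-finder in C (an illustration only, far cruder than the commercial analyzers the article alludes to). It scans a source file for calls to library functions that do no bounds checking; real tools parse the code and track how data flows through it rather than matching patterns.

    #include <stdio.h>
    #include <string.h>

    /* Toy static scanner: flags lines that call classically unsafe
       C functions. Matching on the text "strcpy(" etc. is crude;
       real analysis tools parse the program instead. */
    int main(int argc, char **argv) {
        const char *unsafe[] = { "strcpy(", "strcat(", "gets(", "sprintf(" };
        const size_t count = sizeof(unsafe) / sizeof(unsafe[0]);
        char line[1024];
        int lineno = 0;
        FILE *f;

        if (argc != 2 || (f = fopen(argv[1], "r")) == NULL) {
            fprintf(stderr, "usage: %s file.c\n", argv[0]);
            return 1;
        }
        while (fgets(line, sizeof(line), f) != NULL) {
            lineno++;
            for (size_t i = 0; i < count; i++)
                if (strstr(line, unsafe[i]) != NULL)
                    printf("%s:%d: possible unsafe call: %s...)\n",
                           argv[1], lineno, unsafe[i]);
        }
        fclose(f);
        return 0;
    }

Crude as it is, even this sketch would flag the unchecked strcpy in the earlier example.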

Put it all together, and you get—ta da!—change. And not just change, but profound change. In technology, change usually means more features, more innovation, more services and more enhancements; in every case, it’s the vendor defining the change. This time, the buyers are foisting a better kind of change on the vendors: they’re forcing them to go back and fix the software that was built poorly in the first place. The suddenly empowered corporate software consumer is holding vendors accountable, writing liability into contracts, pushing legislation and threatening to take the budget elsewhere if the code doesn’t tighten up. And it’s not just empty rhetoric.

Says Scott Charney, chief security strategist at Microsoft, “Suddenly, executives are saying, ‘We’re no longer just generically concerned about security.’”

Berinato’s scrutiny of software security continues in “Forcing the Vendors to Get Secure”.

