Everywhere you turned for the longest time, people sang the praises of “Web 2.0 this” and “Web 2.0 that”, but did anyone ever actually stop and try to figure out what made something Web 2.0? Sure, there was a radical shift in design aesthetics — where would we be without rounded corners and cotton candy colored logos? — but did anyone ever come up with an official definition?
The answer is “yes and no.”
The term first popped up in 2004 with the O’Reilly Media Web 2.0 Conference and referred to what was going to happen to the Internet in the wake of the great Dot Com Bust of a few years earlier. No one really seemed to expect the term to catch on as it did, but people on the Internet do love their buzzwords. Tim O’Reilly, the founder of O’Reilly Media, felt compelled in December 2006 to finally clarify the term a bit more in a blog post:
Web 2.0 is the business revolution in the computer industry caused by the move to the internet as platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them.
My personal take boiled down to this: Web 2.0 means the Web site does something for you.
Throughout Web 1.0, sites were static and really amounted to nothing more than brochures: you put up information, people came to read it, they left. One of my favorite quotes to that effect was from Darren Barefoot, who said, “Web 1.0 was about lectures, Web 2.0 is about conversation,” and I really couldn’t agree with him more.
The term “Web 2.0” may be on its last legs, and a lot of us are thankful for that, but its ramifications on information technology will be with us for years to come. There is no way that users will ever go back to not having control over how they consume information, and they also won’t settle for not having a say in it.