Web 2.0 vs Web 1.0: Why Ajax is Conceptually Better

Ajax has been around for quite some time now, and it has its lovers and its haters. First of all, let's distinguish between two things that are often confused when we talk about Ajax. What many think of as Ajax is really two independent techniques:

  • Asynchronous communication: fetching and sending data in the background, without reloading the page.
  • DOM manipulation: changing the page in place, which allows all those fancy Flash-like effects, such as zooming images and floating windows, but also really basic stuff, like swapping the text in a box.
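The two techniques can be sketched in a few lines of browser JavaScript. This is a minimal, hypothetical example (the URL and the element id are made up for illustration): an XMLHttpRequest fetches data in the background, and a DOM update swaps the text in a box without a page reload.

```javascript
// A pure helper that turns the fetched data into the text we display,
// kept separate from the browser wiring below.
function renderGreeting(data) {
  return "Hello, " + data.name + "! You have " + data.messages + " new messages.";
}

// Browser-only wiring (hypothetical URL and element id).
function refreshInbox() {
  var xhr = new XMLHttpRequest();
  // Asynchronous communication: the third argument makes the request non-blocking.
  xhr.open("GET", "/inbox/status.json", true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var data = JSON.parse(xhr.responseText);
      // DOM manipulation: swap the text in a box, no page reload needed.
      document.getElementById("inbox-box").textContent = renderGreeting(data);
    }
  };
  xhr.send();
}
```

Note how the two concerns stay separate even in this tiny sketch: one function talks to the server, the other decides what the user sees.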

If we want to split hairs, there is a third thing that makes up those Web 2.0 applications: a logic layer, which takes care of executing the application on the client.

Many of the haters argue that with these techniques programmers build nice applications but make things more difficult: they add complexity to the client, which before Web 2.0 was a mere viewer, they create portability issues, and, last but not least, they exclude all those visitors who do not use a supported browser. But let's go a bit deeper into what is involved in building a web service. We can identify three different parts:

  1. The server side application: this may be as complex as you want but for our purposes we’ll just assume that it gets some input from the user, processes it, updates its internal state and returns some data to the user.
  2. The data: the input and output data of our service.
  3. The user interface: this is where the user gets to see the data in all its beauty (or ugliness, as often happens).

Now, in the "old" way of doing things (let's just call it "Web 1.0"), parts 2 and 3 were clustered together. With current Web 2.0 development we tend to separate them again, completely decoupling the layout from the data. The idea is to download a relatively small engine that takes care of what happens on the page and asynchronously fetches the data it needs from the server, often using data-encapsulation formats such as JSON or XML.
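The difference is easiest to see side by side. A made-up example: in the Web 1.0 style the server sends data already wrapped in layout, while in the Web 2.0 style it sends pure JSON and a small client-side function turns it into markup. The payload and field names below are invented for illustration.

```javascript
// Web 1.0 style: the server mixes data and layout in one response.
var web10Response =
  '<tr><td class="title">Dive Into Ajax</td><td class="price">EUR 29</td></tr>';

// Web 2.0 style: the server sends pure data; the client-side engine
// decides how to present it.
var web20Response = '{ "title": "Dive Into Ajax", "price": 29, "currency": "EUR" }';

// The "engine" part: a tiny template function living on the client.
function renderRow(book) {
  return '<tr><td class="title">' + book.title + '</td>' +
         '<td class="price">' + book.currency + ' ' + book.price + '</td></tr>';
}

var book = JSON.parse(web20Response);
console.log(renderRow(book)); // produces the same markup as web10Response
```

The user sees the same row either way, but in the second version the data can be re-rendered, sorted, or consumed by another program without touching the server again.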

This is actually one step forward, yet at the same time a return to the roots of the Web: we now serve "pure data", or content, without all the formatting clutter that scrambled our data back in Web 1.0. A nice side effect is reduced redundancy, since the layout no longer has to be downloaded on every refresh. Serving pure data over standard formats and standard protocols also allows non-human interaction between a web service and a client: the data is available for further processing and is independent of its actual representation in the user interface. And here we are: welcome to the Semantic Web.

Humans are capable of using the Web to, say, find the Swedish word for “car”, renew a library book, or find the cheapest DVD and buy it. But if you asked a computer to do the same thing, it wouldn’t know where to start. That is because web pages are designed to be read by people, not machines. The Semantic Web is a project aimed to make web pages understandable by computers, so that they can search websites and perform actions in a standardized way. The potential benefits are that computers can harness the enormous network of information and services on the Web. Your computer could, for example, automatically find the nearest manicurist to where you live and book an appointment for you that fits in with your schedule. A lot of the things that could be done with the Semantic Web could also be done without it, and indeed already are done in some cases. But the Semantic Web provides a standard which makes such services far easier to implement. [From Wikipedia]

The potential of the Semantic Web is just unimaginably huge, to say it with the words of The Hitchhiker's Guide to the Galaxy: "it's big. You just won't believe how vastly, hugely, mind-bogglingly big it is." And there are already some examples:

  • MusicBrainz: Metadata for music
  • Last.fm: music profiling
  • flickr: imaging service
  • del.icio.us: bookmark sharing
  • FedEx: package tracking
  • PayPal: Fast and easy payments online.
  • many more…

Remarkably, we can find almost every big player of the Web 2.0 movement in this list. Because many services already rely on XML or JSON for their internal workings, it becomes really easy to access them, even without the original "Web Frontend".
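A sketch of what "access without the Web Frontend" looks like in practice: a script consumes a music-metadata service's JSON directly, with no browser involved. The response shape and field names below are assumptions made up for illustration, not any particular service's real API.

```javascript
// Hypothetical JSON, roughly as a music-metadata service might return
// it for an artist search. The field names are invented for this sketch.
var response = JSON.stringify({
  artists: [
    { name: "Kraftwerk", country: "DE", score: 100 },
    { name: "Kraftwelt", country: "US", score: 72 }
  ]
});

// Pick the best match: no HTML to scrape, just pure data to process.
function bestMatch(json) {
  var artists = JSON.parse(json).artists;
  return artists.reduce(function (best, a) {
    return a.score > best.score ? a : best;
  });
}

console.log(bestMatch(response).name); // → "Kraftwerk"
```

This is exactly the non-human interaction mentioned above: the same data that feeds the human-facing site is equally usable by another program.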

This article appeared originally as a blog at Christian Decker's web site, http://snyke.net/blog/2006/03/30/why-ajax-is-conceptually-better/. Republished here with the permission of the author.

Aiden Reynolds
Aiden Reynolds is a content editor at WEB 2.0 JOURNAL. He was born and raised in New York, and has been interested in computer and technology since he was a child. He is also a hobbyist of artificial intelligence. Reynolds is known for his hard work ethic. He often puts in long hours at the office, and is always looking for new ways to improve his writing and reviewing skills. Despite his busy schedule, he still makes time for his interests, such as playing video games. In his free time, Reynolds enjoys spending time with his wife and two young children. He is also an active member of the community, and frequently volunteers his time to help out with local events.