Wednesday, February 16, 2005

Understand where the Web is headed

Maybe it's obvious, but internet technology is moving faster right now than ever before. Blogging has become mainstream: it isn't just geeks posting anymore, and articles on content syndication are appearing on the covers of major business magazines. I enjoy watching the trends and am becoming quite the blogging evangelist. People outside the blogging circle seem to have a difficult time understanding its value until they've tried it; the trouble is, I don't think people try it until they see the value in it. Ironically, many of the experts (outside the tech industries) who have good content to offer the community are the ones who already have their own systems and don't want to change until it makes sense. They're falling behind very quickly, and as far as the Internet is concerned, blogging is just the start of what's coming over the next couple of years. I've been throwing together some notes over the last couple of weeks, and the results follow. They're intended to spell out the differences between the web we've grown accustomed to and the web of today and the near future.

Each section below contrasts Yesterday (the web we've grown accustomed to) with Tomorrow (the web of today and the near future).

Technology Boundaries
Yesterday: Online real estate consists of URLs and individual web sites. Other communication mediums like newsgroups, e-mail, and instant messaging co-exist in parallel to the web.
Tomorrow: Those boundaries blur as information channels merge and users join active online communities. New technologies and communication concepts like WikiWiki, weblogs, and social-tagging networks enable authoritative users to provide and share content with each other easily. Content syndication (via RSS and other such feeds) and e-mail notifications help blur the line between the web and other traditional internet technologies. (A short feed-reading sketch appears after these notes.)
E-Trash
Yesterday: A majority of web content is used for marketing purposes. Search engines and users are easily manipulated into visiting sites riddled with advertising and popups. This garbage dilutes the quality of information on the web and gets in the way of efforts to organize information into useful clusters.
Tomorrow: Web content is much easier to add, since most sites leverage technologies like client-side applications, e-mail notification, XML messaging, content-rating systems, context-based discussion forums, and anonymous feedback.
ROI - Return On Information
Yesterday: Website owners rely on web-development firms and applications for adding and managing content. Updates are slow, and therefore strategic; providing content is an unnatural, calculated process. Up-to-the-minute news is found on television, laced with the network's insights, opinions, and bias, and freeze-dried to fit into a four-minute segment. Hours later it's on the network's websites, and weeks after that the real stories may surface somewhere online.
Tomorrow: Updates to content happen in real time, and collaboration allows almost immediate correspondence between users. Experts in virtually every industry start to appear as knowledge authorities. Like-minded individuals now work together instead of independently. Strong, subject-centered communities start to develop, and associations between people and industries become discoverable.
Structural Standardization
Yesterday: Personal sites are amateurish, hard to read, and poorly managed.
Tomorrow: Collaboration and content are provided by pre-built sites designed to be easy to use and professional-looking. Personal, hacked-up homemade websites start being replaced by family web portals, weblog engines, discussion boards, and wikis. Interfaces start becoming standardized, so users are immediately familiar with new sites.
Content Standardization
Yesterday: There is no standardization in HTML markup. Styles and markup vary depending on the developer or the development platform that wrote the HTML. Aggregating this information for analysis against similar sites is virtually impossible. Web crawlers browse through entire web sites, following hyperlinks and mindlessly collecting text.
Tomorrow: Content itself is also standardized, and collecting and analyzing information from sites becomes a trivial matter. Web crawlers can now capture information by user and subject. Instead of just collecting text, they monitor associations in subject matter, hyperlinks, and posting categories, and intelligently maintain a multi-dimensional network of associations between sites and individuals. (A sketch of this kind of association index appears after these notes.) Web crawlers are now able to organize communities at a much finer level than their predecessors.
Mechanical vs. Natural Content States
Yesterday: Web sites exist as a hodge-podge of scattered information. Hundreds of search engines try to bring order to the chaos, but only a handful succeed. Even in the best of circumstances a search commonly returns hundreds of thousands of results, and rankings are still more a product of marketing ingenuity than quality of content. The search engines behave in a relatively mindless, mechanical state.
Tomorrow: Detailed information is being collected, and associations between people, content types, and sites are being made. People are passively sharing knowledge with complete strangers. These strangers, in turn, are learning at an accelerated rate while providing the knowledge they have to other communities. This cycle continues and evolves, and for the first time the existence and purpose of information becomes natural.
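
The Technology Boundaries notes above mention content syndication via RSS as one of the things blurring the old lines between the web, e-mail, and other channels. As a rough illustration of how little machinery a feed consumer needs, here is a minimal Python sketch that pulls an RSS 2.0 feed and lists its items. The feed URL is a hypothetical placeholder, and a real reader would add error handling and caching.

import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "http://example.com/blog/rss.xml"  # hypothetical placeholder feed

def fetch_items(url):
    """Yield (title, link, category) for each <item> in an RSS 2.0 feed."""
    with urllib.request.urlopen(url) as response:
        tree = ET.parse(response)
    for item in tree.iter("item"):
        title = item.findtext("title", default="(untitled)")
        link = item.findtext("link", default="")
        category = item.findtext("category", default="uncategorized")
        yield title, link, category

if __name__ == "__main__":
    for title, link, category in fetch_items(FEED_URL):
        print(f"[{category}] {title} -> {link}")

The same handful of lines works whether the feed comes from a weblog, a wiki's recent-changes page, or a news site, which is exactly why syndication blurs the old boundaries.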

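The Content Standardization notes describe crawlers that track posting categories and build a web of associations between sites and subjects. The sketch below is one hypothetical way such an index could start: it reads a handful of RSS feeds (the URLs are placeholders) and groups the sites by the categories they post under. A real crawler would obviously go much further, following hyperlinks and weighting the associations it finds.

import urllib.request
import xml.etree.ElementTree as ET
from collections import defaultdict
from urllib.parse import urlparse

# Placeholder feed URLs; swap in real RSS 2.0 feeds to try it out.
FEEDS = [
    "http://example.com/blog/rss.xml",
    "http://example.org/weblog/rss.xml",
]

def sites_by_category(feed_urls):
    """Map each posting category to the set of sites that publish under it."""
    index = defaultdict(set)
    for feed_url in feed_urls:
        site = urlparse(feed_url).netloc
        with urllib.request.urlopen(feed_url) as response:
            tree = ET.parse(response)
        for item in tree.iter("item"):
            # An RSS item may carry several <category> elements; record each one.
            categories = [c.text.strip() for c in item.findall("category") if c.text]
            for category in categories or ["uncategorized"]:
                index[category].add(site)
    return index

if __name__ == "__main__":
    for category, sites in sorted(sites_by_category(FEEDS).items()):
        print(f"{category}: {', '.join(sorted(sites))}")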

posted on Friday, February 11, 2005 8:36 PM
