Sunday, December 19, 2004

RSS Venture capitalists: Buyer beware!

Does the word "Enron" make your skin crawl? I suggest you get something for your stomach before continuing.

RSS and XML have been touted as the "new" new-thing for the public to embrace. Caution is warranted. RSS is good for late-breaking news from sources that can't get an official platform. Then again, so is Google. So much for the RSS benefits.

And a small problem for the RSS groupies--many of their claims don't stand up.

Claim 1: The venture capital bridge funding is going into new products.

There's not much sense in creating "consolidating mechanisms" like aggregators that perform no better than Google. At least Google is self-contained and self-supporting, both in content and in software management. Aggregators require constant pruning.

Further, there are too many holes in the existing RSS-XML platforms to credibly believe the money is going to genuinely new products. It appears the money is going to back-fill the holes that were not filled the first time around.

We still lack a good set of metrics for either the boards or independent auditors to ensure the RSS-XML backbone meets user requirements. They've had plenty of conferences on standards, yet the developers still hate XML.

Auditors need to check where that funding is actually going; demand a good story on why those requirements were not completed already; and ask how the "new requirements" are going to get completed when the funding is exhausted on the unfinished work.

Claim 2: There's a huge market.

Let's also consider the time spent verifying claims. We have a simple validator that reports an "error," yet this error is irrelevant--the feed can still be read.
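For the curious, here's a minimal sketch of that gap between "validates" and "readable," assuming the Universal Feed Parser library (feedparser); the feed URL is a placeholder, not a real endpoint.

```python
# Sketch: a feed can flunk strict validation yet still be readable.
# Requires the Universal Feed Parser library ("feedparser").
import feedparser

# Placeholder feed URL -- substitute any real feed.
d = feedparser.parse("http://example.com/index.rss")

if d.bozo:
    # The parser flags the feed as not well-formed ("bozo")...
    print("Validator-style error:", d.bozo_exception)

# ...yet the entries are often still recoverable and readable.
for entry in d.entries:
    print(entry.get("title", "(no title)"))
```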

Also, the same validator site reports that fewer than 2,000 have used the site. Wow--so we have 2,000 people on a planet of 6.2B that have a valid feed. I'm not impressed. It's been how long since RSS was introduced? Oh, that's right--the RSS users' group got cancelled for lack of participation.

Auditors need to get a good understanding of the returns on advertising--and hear some credible argument for why the model that failed under the web/dot-com bubble is going to be magically solved under RSS.

I see nothing but handwaving and Genies. Where's Major Healey?

Claim 3: This has been audited by outside investment bankers.

How much money do they have in the firms they're covering? They don't audit; they professionally market things.

A real audit would look into the number of recurring error reports at a given syndication site and ask, "What's the plan to close out these reports?" It would also ask why confirmation messages come back saying the site is listed, the ping reports show no error, yet the site is not actually listed for subscription.
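For the technically inclined, a minimal sketch of that disconnect, assuming the standard weblogUpdates XML-RPC ping interface; the server URL and site details are placeholders:

```python
# Sketch of the disconnect: a ping "succeeds," yet success says nothing
# about whether the site actually appears anywhere. Uses the
# weblogUpdates XML-RPC interface; server URL and site details are
# placeholders.
import xmlrpc.client

server = xmlrpc.client.ServerProxy("http://rpc.example.com/RPC2")
reply = server.weblogUpdates.ping("Example Weblog", "http://example.com/")

# flerror=False only means the ping was accepted -- not that the site
# is listed, searchable, or archived.
print("Error flag:", reply["flerror"])
print("Message:  ", reply["message"])

# The real audit step would go here: independently fetch the directory
# or search listing and confirm the site is actually present.
```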

We don't have these kinds of answers. Who knows what questions the boards of directors and audit committees "should" be asking under SarBox; their eyes are as glazed over as they were in the dot-scam mania of the 1990s.

Claim 4: The aggregators do all the work for you.

The aggregators are just glorified bookmarks. They don't actually do any work. At best, the aggregators simply create islands of content that outside search engines cannot penetrate. So much for the "seamless" web, and "hello" retrograde internet development.

At worst, the aggregators perform worse than a simple bookmark list. It's a waste of time to create the list, transfer the data, and then spend x-months believing you're working with something better that is actually no better.

Congratulations on creating a mechanism to delay awareness of reality.

Claim 5: It's really easy to review many sites.

Wrong. It's more difficult. The current aggregators create the false expectation that things are easy. In reality, there are holes, and a personal bookmark list works just fine.

It's absurd to suggest the "old web" could not be reviewed easily. Hello, can you say, "links"? At least the "old web" could be searched; the current aggregators do not expedite reading sites, as one simply has to reload the feeds one by one.

There's no advantage to having an aggregator over keeping a private list of the site URLs. At least private bookmarks allow credible, reliable searching.

The existing aggregators have proven themselves wanting; inter alia:

Listed sites with valid feeds do not appear in the aggregator searches;

Valid keyword searches for known content on the web produce no hits;

Content that has been reported as pinged does not get archived; and

There is no method to backfill the content holes.
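A minimal sketch of the spot-check an auditor--or a skeptical user with a private bookmark list--could run against those failures, again assuming feedparser; the URLs are hypothetical:

```python
# Sketch of a do-it-yourself spot-check over a private bookmark list:
# fetch each feed and report whether it actually returns readable
# entries. Uses feedparser; the URLs are hypothetical.
import feedparser

bookmarks = [
    "http://example.com/index.rss",
    "http://example.org/atom.xml",
]

for url in bookmarks:
    d = feedparser.parse(url)
    state = "not well-formed" if d.bozo else "ok"
    print("%s: %s, %d entries" % (url, state, len(d.entries)))
```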

Claim 6: RSS saves time.

Wrong. The feeds, when compiled, simply create larger piles of sites to "not be able to get to." It takes time to re-load and re-check these URLs--time that could be spent actually reading content.
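For scale, here's a sketch of that polling loop, including the conditional-GET (ETag/Last-Modified) trick that's supposed to cut the cost--assuming the server cooperates, which is exactly the kind of hole described above; the URL is a placeholder.

```python
# Sketch of the polling loop and its cost. A well-behaved server
# honors conditional GET (ETag/Last-Modified) and answers
# "304 Not Modified," so nothing is re-downloaded. Not every server
# cooperates; the URL is a placeholder.
import time
import feedparser

url = "http://example.com/index.rss"

start = time.time()
d = feedparser.parse(url)
print("First fetch took %.2f seconds" % (time.time() - start))

# Re-check using the validators returned by the first response.
d2 = feedparser.parse(url, etag=d.get("etag"), modified=d.get("modified"))
if d2.get("status") == 304:
    print("Unchanged since last poll -- nothing to re-read.")
```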

At best, RSS simply creates a new set of rules to confuse people and avoid credible outside oversight. Another delaying tactic. Indeed, despite the confusion and the lack of pointed questions, the developers still can't get their act together.

The community has a disjointed XML backbone, the developers hate it, there are known holes in the system, there is no credible leadership to address these issues--and all the while they talk about how great it is. Can you say, "Which shell am I hiding the pea under?"

It's a shell game. The more shells, the greater the difficulty of oversight, work-package traces, and credible progress reports.

I remain skeptical of reports that individual web-surfers can surf hundreds of sites faster. We need some specific names, something more than a website touting this conclusory claim, and some real data to justify the conclusion.

Got time to kill? Then you don't need feeds. To get listed, you need a full weekend to load 100 feed URLs, document the report-back messages, and then verify the messages actually generated an accurate status message. Hello, "logic disconnect" that the Street loves to exploit.

Other questions: For those who claim they can read 100s of RSS feeds "more quickly," I ask: Did you ever actually read the sites to begin with? How much time did you spend loading the feed URLs into your aggregator-bookmark file? And how much repetitive content do you have between the common aggregators? [Are your eyes glazing over? Exactly what they want.]

Lack of integration: The software and aggregators currently available to the public do not integrate feeds into a single easy-to-read platform like Google News--and Google News searches can be customized.
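For reference, a minimal sketch of the kind of merge that's missing, assuming feedparser and placeholder URLs--which makes its absence from the shipping aggregators all the more telling:

```python
# Sketch of a Google-News-style merge: pull several feeds into one
# chronological list. Uses feedparser; the URLs are placeholders.
import feedparser

feeds = [
    "http://example.com/index.rss",
    "http://example.org/atom.xml",
]

items = []
for url in feeds:
    for e in feedparser.parse(url).entries:
        # updated_parsed/published_parsed are time.struct_time values
        # when the feed supplies a usable date.
        when = e.get("updated_parsed") or e.get("published_parsed")
        if when:
            items.append((when, e.get("title", "(no title)"), url))

# One page, newest first, across all sources.
for when, title, source in sorted(items, reverse=True):
    print(title, "--", source)
```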

Invalid status messages: Much time is spent getting listed, without results. I see no time savings when the time spent "self-reporting a site" amounts to a confirmation message of "you're listed," yet a later check cannot find the "listed" site.

Duplication: There exists no method to screen out duplication across platforms. It's no "time savings" to have similar content multiply listed. Google permits content stacking--similar stories are grouped. Aggregators do not do this.
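A crude sketch of that content stacking--grouping near-duplicate items by a normalized title key. The entry data below is made up, and real stacking would need smarter similarity matching than exact key equality.

```python
# Crude sketch of "content stacking": group near-duplicate items by a
# normalized title key. The entry data is hypothetical; real stacking
# would need smarter similarity matching.
import re
from collections import defaultdict

entries = [
    {"title": "RSS Startup Raises Funding", "source": "aggregator-a"},
    {"title": "RSS startup raises funding!", "source": "aggregator-b"},
    {"title": "XML Standards Meeting Ends", "source": "aggregator-a"},
]

stacks = defaultdict(list)
for e in entries:
    # Normalize: lowercase, collapse punctuation and whitespace.
    key = re.sub(r"[^a-z0-9]+", " ", e["title"].lower()).strip()
    stacks[key].append(e["source"])

for key, sources in stacks.items():
    print("%s: %d copies (%s)" % (key, len(sources), ", ".join(sources)))
```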

The ugly secret: How much time do corporations spend re-sending data that cannot be taken in via XML? We have yet to see credible independent tests demonstrating that time is saved. How much of your contract effort is going to pay for a "back-up system" that is actually the primary? Oh, we can't talk about that.

Returns on advertising: We need to see something related to the real content readers are looking at. I'm not convinced the "eyeballs on the feed" metrics amount to much [for purposes of estimating the potential advertising pools/revenues]. Again, look at returns on advertising, not simply eyeballs and dollars spent on ads.

Claim 7: XML brings the sites to you.

No change. Bookmark links also "brought the site to you."

XML is no different--or is this as opposed to the "old" web, where you had to go look for things? You still have to look for things on the aggregator, which is no different from a professional link-archiving system.

Further, other "bring the web page to you" systems mysteriously disappeared, yet XML-RSS self-reports that "it is different." How? No answer.

Claim 8: You can syndicate easily.

Strike "easy." That is if you forget the errors, time spent, and the failures to correctly report listing status. Plus, even if you're syndicated, so what. This doesn't mean anyone is going to read it.

How much of your money is going to fund a developer's late-night surfing instead of actual product development? If they're blogging "about other stuff," they're not developing.

Claim 9: It beats the alternatives.

What? Using Google, bookmarks, or other search engines? At least when I use a search engine, I know I've got something.

With an aggregator, I can't get listed sites to show up in searches; nor can I be confident that the sites self-reported as "pinged" have actually had their content saved in the archive.

Claim 10: It's a great platform.

For whom? The developers looking for another "pet project" to shop to the Street? They've got endless error reports coming in--the same errors--and no closure.

At worst, they do not support the "well-supported" platforms. Why is money being spent on efforts that will outline, catalog, and organize error reports for systems the developers have no plans to ever integrate with or support? No answer from the developers.

Claim 11: It's a whole new adventure.

That's right. Venture capital is an adventure in trying to get your money back. Hopefully you haven't mortgaged your home.

XML venture capital: how long will it take for the truth to surface? The reason they're promoting XML and RSS as "great" is that the smart money has already left.

Don't be dumb. Thoroughly investigate anything related to RSS and XML.
