An introduction to Orbital Theory: Understanding and Putting Value on Web 2.0 communities

December 28, 2006

This post is the first in a series on Orbital Theory, a way of understanding Web 2.0 sites as businesses and, with that understanding, how to monetize them for maximum financial success.

If you’re following the attempted sales of sites like Digg and Facebook, you’ll no doubt recognize the need for a better way of evaluating a site’s worth than its Alexa rank or its net revenue times a negotiated multiple. That’s exactly what Orbital Theory attempts to provide.


In May 2006, I was seriously approached by a start-up company about acquiring Worth1000, an artistic contest web site I started. Even though Worth1000 was built in 2001, before the term “Web 2.0” had been popularized by Tim O’Reilly, it would definitely be considered a Web 2.0 website by his current standards. (Tim defined Web 2.0 as “a system that becomes more valuable any time more data is added to it” at a recent CollabNet conference I attended.)

It was the first time a large company had made a serious attempt to purchase Worth1000, and though I was leery at the thought of selling it, I was curious what their offer would be. After a few phone calls, meeting their team in person and checking out their headquarters, they called me up a few days later with their offer: “We’d like to offer three times your annual net revenue, in the form of some cash upfront and the rest as stock.”

Of course, I politely declined the offer and hung up the phone more than a little puzzled. Their site was a popular social meeting place and blog along the lines of MySpace, Facebook and Xanga; in other words, they were not currently profitable. Why, then, would they rely on a website’s net revenue as a way of evaluating its worth? They had to know that wouldn’t be a very tempting offer for me, and weirdest of all, it ignored something that anyone who runs a Web 2.0 site realizes immediately:

Your content-creating userbase is worth a heck of a lot more than your annual revenues might be.

I didn’t put much thought into it, but then in September I received yet another inquiry, this time from an associate at a venture capital firm scouting for investments.

The highlights of the questions he asked when we discussed Worth1000’s value went essentially like this:

  • “So how many unique visitors did you get per month?”
  • “How many registered members does your site have?”
  • “What’s your monthly gross revenue?”
  • “What’s your monthly net revenue?”

I tried to explain that those were somewhat useful but ultimately inaccurate ways of gauging a Web 2.0 site’s actual worth, but I didn’t have a clear way of explaining it that anyone but a webmaster could understand, and we didn’t follow up after the call. Members who actually generate content, and therefore attract more traffic into a site’s orbit, are much, much more valuable than registered members or visitors who don’t. Or simply put:

Not all users are created equal.

I thought about how to explain this relationship to non-webmasters using a clear analogy off and on over the next couple of months, but never had a reason to put my thoughts into writing. Until this past Wednesday afternoon.

Sitting in my inbox was a rather innocuous-looking email with the subject “Possible investment in”. It looked and read like a scam. Here it is in its totality, sans contact info:

“I manage venture investments for a well-known individual and he is interested in investing in Worth 1000. Is this something we can discuss in more depth?”

I visited the domain part of the sender’s email address, got a “Page Cannot Be Found” error, and my Nigerian-scammer alert went off. Before deleting the e-mail, curiosity got the best of me and I googled the domain instead. And whoa: the top result was a Wall Street Journal article discussing a very well-known entrepreneur who apparently uses this company name as a front for his personal investments.

I called up the number listed in the e-mail and got a cheery response.

“Hi Avi, thanks so much for calling me back. I represent _______________ and he would like to invest in Worth1000.”

I was floored. We chatted briefly about the site and my (and his) intentions for it. We agreed to talk more in depth this coming Thursday. Part of what we are to talk about is how to evaluate Worth1000 properly.

I finally had that impetus I needed to write this article. And here I am.


Orbital theory is more of an analogy than a theory. It compares websites and their visitors to planetary bodies in 3-D space, instead of the flat (linear) Alexa approach of measuring traffic by a head count of all visitors.

There are 5 tenets of Orbital theory:

  1. Every user of a website has a unique mass, measurable by their gravity (how many other users they attract towards themselves).
  2. The larger a user’s mass, the stronger their personal gravity.
  3. Users with larger mass pull users with smaller mass into their orbit.
  4. Once in a site’s orbit, a user becomes a measurable part of the website: his mass now belongs to a self-contained system.
  5. The more often a user returns to a site (the more loyal they are), the tighter and faster their orbit is.

What is important to note is that a visitor who visits a site once and never returns (or never recommends it to others) is not in orbit, and therefore contributes none of his mass to the site’s system for the purposes of evaluating its worth.

This shifts the most important statistic from “unique visitors per month” to “repeat participating visitors per month”.


A visitor who checks a site repeatedly throughout the day is more valuable than one who visits less frequently. They have a smaller/quicker orbit.

So where does the value come into play?

In a nutshell, grouping visitors by the total sum of the mass and speed of their orbiting bodies is a better way of evaluating their worth than grouping them by a flat head count of all visitors.

Here’s an example of part of this theory in action. Say you regularly have just 2 visitors to your blog: User A leaves a thought-provoking comment on your post, while User B only comes to read. User A has a greater mass than User B. If you decide to sell your blog, the total number of visitors (2) is an inaccurate measure of the site’s value, since it implies that both users are equal. A better way would be to take the sum of both visitors’ mass. Let’s assume that a visitor who does nothing except visit a page is worth +1 gravity as a starting point, so in our example User B is worth only +1. Anyone who attracts other users toward the site starts at that same +1 gravity and gains additional gravity from the attractiveness of his contribution to new users. If Google indexes the comment and 4 visitors find the blog post as a result (even if they never return), User A’s gravity is +5 at the point he makes the post. If, because of his post, he returns to the site repeatedly to see what other comments are posted (speeding up his normal visiting habits to 5 times a day), his value to the site is greater still: he can now interact, and has a stronger potential to attract even more new members by virtue of his more constant presence.

We’ll get a user’s value by multiplying their total gravity by their average daily orbital rate.

So in our example, User A’s value is +5 mass × 5 daily orbits = 25. That means he is worth 25 non-orbital one-time visitors, by virtue of his one comment and his repeat visits to the site.
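As a minimal sketch of this arithmetic (the function and field names here are my own shorthand, not part of the theory):

```javascript
// A user's orbital value: total gravity times average daily orbital rate.
// Gravity starts at +1 for merely visiting; each new visitor a user's
// content pulls in adds another +1 (and it can go negative for spammers).
function orbitalValue(user) {
  return user.gravity * user.orbitsPerDay;
}

// User A: base +1, plus 4 visitors drawn in by his comment; 5 visits a day.
var userA = { gravity: 5, orbitsPerDay: 5 };

// User B: reads once a day and contributes nothing.
var userB = { gravity: 1, orbitsPerDay: 1 };

// The site's worth under this model is the sum over all orbiting users.
var siteValue = orbitalValue(userA) + orbitalValue(userB); // 25 + 1 = 26
```

A one-time visitor who never enters orbit contributes nothing to this sum, which is exactly the point.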

Gravity can be a negative number as well. If a visitor posts spam to a site, or a popular member is discovered gaming a site like Digg, regular readers may quit returning to it. That user has negative gravity in that he repels other users from the system. Similarly, if your site gets too much traffic (e.g., Slashdot links to your site) and it crashes under the flood of people trying to read the article, those visitors contributing to the crash carry a negative gravity, in that they may be souring established visitors who then leave the site’s orbit. Negative gravity is not particularly important for evaluating your site’s worth, but we’ll get back to it in a later post that focuses on enhancing the user experience and optimizing your site’s performance to deal with exponential mass growth.

For now, I hope the general outline of how this theory approaches evaluating users is of interest to those of you interested in Web 2.0. Up next: looking at the theory more in depth, some examples of evaluating real sites (e.g., Digg, Facebook, and Worth1000), and putting orbital theory into action to show how harnessing it correctly could enhance a site’s traffic and value.

I invite everyone to offer their own take on constructing this theory. Does it make sense? Do you have a better approach to it? I think as a group of minds coming together we can come up with something cool and useful.

Good SEO practice

December 14, 2006

Since creating Worth1000 (a terrible example of SEO), I have learned a lot about what works best in getting search engines to crawl and understand your pages. I have since created Plime, which is a great example of an optimized site. In this post, I’ll outline the approach I used in laying out the site to improve its readability by search engines.

This article is not about ranking higher in search engines. That’s the next step. It’s about structuring your content in such a way that search engines will understand it and index it better.

Before I begin, let me explain the logic here. Think of a search engine as a stupid computer program that reads the data on your website as a semantic hierarchy. The HTML on your site is really designed for people to read, so the trick is to also design it so that search engines can read it, in a way that doesn’t interfere with what people see. A well-designed SEO page will not affect your visual design AT ALL.

By semantic hierarchy I mean the program is looking for the following items, in this order to determine a page’s content:

  • Domain name
  • Page’s Title
  • Page’s URL structure (excluding domain)
  • Page’s Headline
  • Page’s Subheadline
  • Page’s Sub-subheadline
  • Page’s content
  • Bolded and italicized words within the content
  • Links within your page, linking to internal pages (breadcrumb trails, left navigation and right article list)
  • Links within your page, linking to external websites

I’ll go through each section with some tips on how to improve it, and explain the logic for both search engines and people wherever a change will affect what people see.

Domain name
If someone owns “” and writes an article called “plime trees are delicious”, they might rank higher in search results for “plime trees are delicious” if anyone links to them, because the domain name is the most important element in most search engines (since, unlike HTML, it cannot be edited).

Page’s Title
Always include both your site’s name and the actual headline of the page you are on (“ | Out of Bounds Contest”) in your title tag. Search engines give this a lot of emphasis.
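As a sketch, a title tag following that pattern might look like this (the site and contest names are purely illustrative):

```html
<!-- Site name first, then the specific page's headline -->
<title>Worth1000.com | Out of Bounds Contest</title>
```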

Page’s URL structure
If all of your pages have dynamic extensions (like .asp or .php) and use querystrings to point to different dynamic articles, you’re making a mistake.

An example of a bad URL would be:

This is bad for two reasons:

  1. PageRank is applied at the document level, which means querystrings are excluded (so you won’t develop separate PageRanks for different articles; all will share the same rank).
  2. Some search engines and stat logs cut off the querystring when indexing your content. Having a link in someone’s stat logs, or in a non-Google engine, that points to “” is useless and is a wasted opportunity to get more visitors and PageRank.

Structure your URLs like this instead:

All that matters in that URL is the number 1163, but you’re already getting some important extra keywords in there that will help with SEO.

I don’t even need to change my code. I can simply use Apache’s mod_rewrite module, which will rewrite that URL to my actual page (everything after the article number is ignored by the rewrite rule, so I can use anything there). Search engines will see it as a static page, even though the underlying script is exactly the same as it is now.

This also has the added bonus that when users link to the URL in forums that automatically parse URLs, the link itself now carries some keywords. A clean URL like this definitely won’t annoy users either.
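A minimal sketch of such a rule for Apache’s mod_rewrite (the file name article.php, the id parameter and the URL pattern are placeholders of mine, since they depend on how your scripts are actually set up):

```apache
# .htaccess sketch: map a keyword-rich, static-looking URL like
# /article/1163/some-descriptive-keywords onto the real dynamic page.
# Only the numeric id is captured; the trailing keyword slug is ignored.
RewriteEngine On
RewriteRule ^article/([0-9]+)/ article.php?id=$1 [L]
```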

Page’s Heading and Subheadings

Don’t use fonts and CSS to make a headline appear big. Wrap it in <h1> tags instead.


A note to SEO folks: this tag is not as obsolete as you might think. <h1> still semantically tells the search engine “this is the most important headline on this page.” <h2> tags are less important, for subheadlines, but if you have subheadlines in an article, using them can only help the spider understand how your content is structured, and that is a good thing.

So long as you style the <h1> in your CSS the same way you have been styling larger headlines in the past, users won’t see any visual difference at all.
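A sketch of the swap (the class name, headline text and font sizes are invented for illustration):

```html
<!-- Before: a headline the spider sees as ordinary text -->
<div class="headline">Ten Tips for Better Contests</div>

<!-- After: a real <h1>, restyled so readers see the exact same thing -->
<style>
  /* Reuse whatever rules the old .headline class carried */
  h1 { font-size: 24px; font-weight: bold; margin: 0; }
</style>
<h1>Ten Tips for Better Contests</h1>
```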

The best part of this tip is that very few sites make use of it.

Page’s content
This is your keyword text. Make sure you have alt and title attributes on all images, so search engines can read them. I’d use <h3> for the captions under images, since images (and the captions that explain them) are usually important to the article. Search engines place minor emphasis on words inside bold and italic tags in your content. Very minor emphasis, but it’s worth noting that if an article is about a certain celebrity, for instance, and their name appears multiple times, bolding each appearance of the name will help the spider conclude that the celebrity is the focus of the page. Don’t keyword stuff, though. Search engines will notice if the same words appear too often (keyword density), and it will annoy your users in any case, so don’t try to pack your page with redundant keywords. Always write your articles for your users.
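A sketch of what that looks like in markup (the file name and celebrity name are invented for illustration):

```html
<!-- alt and title attributes give the spider something to read for the image -->
<img src="premiere.jpg" alt="Jane Celebrity at the premiere"
     title="Jane Celebrity at the premiere">
<h3>Jane Celebrity arriving at the premiere</h3>

<!-- an occasional bolded name hints at the page's subject; don't overdo it -->
<p>... <b>Jane Celebrity</b> spoke briefly with reporters ...</p>
```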

Links within your page
You need to make your navigation load before the page’s content so that spiders can access the rest of your site; you can use CSS so this doesn’t affect where the navigation appears visually. This will help your links get indexed better. When a spider visits your site, it sees the page as one huge block of HTML, no matter how it’s laid out, so where in this block certain elements are placed is very important. Remember that with CSS you can have an element display visually wherever you like on the page, regardless of where it loads in the HTML text block. If those links are all at the bottom of the text, the spider interprets them as unimportant, and in some cases may not even download enough of the page to see them (some search engines only download the first 50,000 bytes of HTML).
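A sketch of the source-order trick (the ids, widths and layout values are just illustrative):

```html
<!-- The nav comes early in the source so spiders hit these links first... -->
<div id="nav">...internal links...</div>
<div id="content">...article text...</div>

<style>
  /* ...but CSS places it on the right, where the design wants it. */
  #nav { position: absolute; top: 0; right: 0; width: 180px; }
  #content { margin-right: 200px; }
</style>
```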

Link to your sitemap prominently on your homepage if you have one. If you don’t, then make one. This is really for the search engines more than the people, but it can be helpful for the people who do use it as well.


All links in your page should include title attributes: <a href="…" title="blog">blog</a>.

Links outside of your page
When linking to any content that isn’t related to your site (let’s say you’re plugging a blog article about tech you enjoyed, but your site is about cat food), always add rel="nofollow" to the hyperlink. Otherwise Google follows it, passes some of your outgoing PageRank to the other site, and may also assume that your sites are similar (since the logic is that sites only link to other sites that will be interesting to their visitors).

Here is what I mean: <a href="…" target="_blank" rel="nofollow" title="great read">great read</a>.


If the site is something you think is related, or you want to pass PageRank, leave rel="nofollow" out.