Around the beginning of the millennium, IBM announced that they would start providing services based on Linux (much as HP and Sun also did). As part of that effort, of course, they would contribute a lot of new code to the Linux kernel and drivers, make the system more robust and stable, and make sure that Linux became a valid corporate-grade operating system in the years to come.
The open source community's reaction was mixed. Strong supporters of open source software in the corporate environment were thrilled by IBM's backing — they had seen IBM's earlier support of projects like Apache, for instance — which would give Linux even more credibility than it already enjoyed. The unthinkable came true: within a few years, IBM, HP, Sun, and others were shipping their servers with Linux as an option, something inconceivable in the late 1990s.
The die-hard open-source idealists were not happy. They felt that IBM was "swallowing" them up, monetising the tens of thousands of hours of labour and millions of lines of code put into a free product. They felt soiled and cheated. Ultimately, however, I believe they were mostly unhappy because it was easier to complain about the "enemy" when it was "outside". Today, corporations, universities, and, naturally enough, individual volunteers all work together to make Linux a better and better operating system — and it is hard to argue with the results. Nobody — not even Microsoft, who have a Linux division — can afford to shun Linux these days. It's "part of the establishment". Even if it never became popular on desktops, it continues to dominate the server-side market.
Now IBM is apparently doing the same with Second Life!
According to Virtual World News, IBM has prototyped a 3D remote-monitoring application for data centres, built inside Second Life. It basically lets system administrators get a quick visual overview of how the data centre is performing, with extra cues (sounds, particle effects) signalling network or server issues. Better tools that let system administrators quickly locate hardware hotspots are crucial; virtual worlds are one way to provide them, and IBM is definitely doing a great job of trying them out.
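The article doesn't describe IBM's implementation, but the core idea is simple enough to sketch: poll each server's metrics and map anomalies to cues an administrator would see or hear in-world. The function and thresholds below are entirely hypothetical, just to illustrate the metric-to-cue mapping.

```python
# Hypothetical sketch (not IBM's actual code): map raw server metrics
# to the visual/audio cue that the server's in-world representation
# would display to a visiting administrator.

def choose_cue(cpu_load: float, packet_loss: float) -> str:
    """Pick an in-world cue based on server health metrics."""
    if packet_loss > 0.05:
        return "red particle burst + alarm sound"   # network trouble
    if cpu_load > 0.90:
        return "orange glow"                        # server under stress
    return "steady green"                           # all healthy

# Example: one overloaded machine and one flaky network link
racks = {
    "rack-01/srv-a": (0.95, 0.00),
    "rack-01/srv-b": (0.30, 0.12),
    "rack-02/srv-c": (0.40, 0.00),
}
for name, (cpu, loss) in racks.items():
    print(name, "->", choose_cue(cpu, loss))
```

The point of putting this in a 3D world rather than a dashboard is spatial memory: the cue appears on the virtual machine standing where the physical one does.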
The most interesting aspect of the article is that IBM is now applying the same concept using OpenSim. OpenSim is open source, of course, but it is as SL-compliant as possible (you use the same SL viewers to access it). This is actually good news. IBM could obviously have developed a completely new virtual world for their experiments, or taken any of the many freely available ones and tweaked it for their use. They could also have worked with LL to further develop LL's own server software — but that would make LL "dependent" on a strong agreement with IBM. Instead, they opted for the best of both worlds: they embraced the OpenSim project, just as they embraced Linux a few years ago. What they develop on OpenSim could later be implemented on LL's own grid — but the point is that IBM can fix bugs and extend functionality in OpenSim quite easily. They don't need to wait for LL's slow development cycles to get the functionality they need.
This makes me think about what it means for "the metaverse". Unlike many others, who expect competition to pop up from every corner now that metaverse-building worlds are "mainstream", I'm rather more skeptical — and the more time passes without anything being launched (we'll see if Sony's Home launches in March...), the more skeptical I remain. Things like Kaneva are even more overhyped than SL was — and they're basically cutting corners to get a faster-rendering engine, at the cost of dropping unlimited user-created content. SL remains the leader in that area, and LL is the only company crazy enough to allow it.
So I think something quite different will happen. People love the concept of SL, but not the way LL handles every issue. The trick, then, is to get the code, tweak it, and run it under a different ToS (either more or less conservative, depending on your political agenda :) ). With the SL client, we can already do that; with OpenSim, we can almost do it on the server side too. IBM apparently spotted the opportunity. They don't want to create "another metaverse". They want SL. But SL run "the IBM way". With OpenSim, they can have both things :)
Now imagine what happens if "everybody" starts doing the same. Oh yes, there are already a handful of OpenSim-run grids — some with several hundred sims! — and these will grow over 2008 as OpenSim slowly advances towards implementing every feature of LL's own server. And what will corporations planning to launch their own virtual worlds do? Start a new project from scratch? Or download OpenSim for free, put a dozen programmers on tweaking it, and launch something SL-compatible but run by someone who understands what users want? If I were in the business, I'd certainly go for the latter. In fact, almost two years ago I started drafting a business plan to run my own sub-grid, connected to LL's, and asked them how much a licence for the servers cost. They said it was too early; they weren't prepared to license the code yet. Well... now I could do the same with OpenSim, and spend the licensing budget on hiring developers to finish up the work instead.
So what I think will happen is something akin to what happened with the NCSA web server and Apache. NCSA launched the "first" web server ages ago, around the time the first graphical web clients appeared. Everybody used the NCSA server with Mosaic as a client. But programmers grew tired of NCSA's slow response in adding features to their server software. It worked, yes, but the Web was new in 1993, there was so much that could be done, and NCSA didn't really plan to integrate it into "their" server. The talented programmers left to launch a new project — Apache — which was mostly NCSA-compatible at the start but evolved hugely afterwards. In the end, in the non-Microsoft camp, only Apache is left (except, of course, for very specialised cases), because it attracted the vast majority of programmers and system administrators who wanted a full-blown web server that worked well, was endlessly extensible, scaled well, and had all the nifty features added as modules...
And what happened in the industry? Companies like IBM dropped their own in-house web servers and simply started using Apache instead. They understood that their revenue came from their applications and services, not from licensing their own web server. As soon as that happened, they also started contributing code (and debugging effort) to the Apache core, making it even more robust. Today, Microsoft aside, almost nobody runs a proprietary web server any more — and few even remember what "NCSA" stands for.
OpenSim might very well become the "Metaverse Apache". We all want SL — users, universities, corporations. We don't want "limited-content" systems, even if they look nice and run faster. We don't want to place content censorship in the hands of a company that popped up from nowhere and might disappear after a few years. We want to capitalise on the 12 million or so people who have downloaded the SL client and seen how it works. We want to keep using the 3 billion lines of code written in LSL, or the 0.5 Exabytes (half a billion Gigabytes) of assets that are in SL. We don't want to waste all that!
On the other hand, LL might also wish to have more programmers making their own grid better — without having to pay for them.
So what seems to be the natural progression here? Linden Lab might never release their own server software after all! In fact, a far better strategy would be to invest their time in developing OpenSim instead — and at some point in the future, simply switch over to it :) Sure, right now LL is the "technology pusher" — they show what is possible, document it, and release the SL client code — while the OpenSim team "absorbs" the client-to-server communications protocol by reverse-engineering it. But in doing so, the OpenSim team works with fresh, clean, new code, which is easy to extend and maintain, unlike SL's own codebase.
LL might learn from the NCSA experience (NCSA, btw, obviously run Apache too) and continue their official help to the OpenSim team, and, when OpenSim replicates all the functionality of LL's own server software, simply move over to it.
Remember that OpenSim is just the server software: you still need the "glue" that makes individual sims behave as if they're on one continuous grid. That's the job of the asset servers. Right now, every OpenSim-based grid uses its own system (often publishing how it works, too), and, naturally, LL uses theirs. LL will still remain the biggest SL-compatible grid ever — but I wouldn't be too surprised to see Zero Linden's prediction come true: LL as a "central hub" for many interconnected grids, drawing revenue from interconnection fees — something like what Central Grid is experimenting with.
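To make the "glue" idea concrete, here is a toy sketch of the arrangement described above: sims hold no content themselves, they fetch everything from a shared asset service by ID, which is what makes many independent sims feel like one grid. The class names and the `rez` method are my own illustration, not OpenSim's or LL's actual architecture.

```python
# Toy sketch of the sim/asset-server split described above.
# All names here are illustrative, not real OpenSim APIs.

class AssetServer:
    """Central store of content, shared by every sim on the grid."""
    def __init__(self):
        self._store = {}                  # asset id -> asset metadata

    def put(self, asset_id, asset):
        self._store[asset_id] = asset

    def get(self, asset_id):
        return self._store.get(asset_id)  # None if unknown

class Sim:
    """A single region; it holds only references, never the assets."""
    def __init__(self, name, assets):
        self.name = name
        self.assets = assets

    def rez(self, asset_id):
        asset = self.assets.get(asset_id)
        if asset is None:
            return f"{self.name}: missing asset {asset_id}"
        return f"{self.name}: rezzed {asset['name']}"

# Two sims sharing one asset server behave like a continuous grid:
central = AssetServer()
central.put("42-ab", {"name": "park bench"})
print(Sim("Region A", central).rez("42-ab"))
print(Sim("Region B", central).rez("42-ab"))
```

Swap the single `AssetServer` for a federation of them, with one grid acting as the hub that routes lookups between the others, and you have roughly the interconnection model Zero Linden describes.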
The future looks bright for Second Life — I mean, SL-compatible virtual worlds :)
[EDIT: Thanks to SignpostMarv Martin for the original tip and correcting some spelling and grammar errors]