Diverse User Base

Tim Sweeney is a game programmer. He is to Unreal what John Carmack is to Quake. I caught this interview with Tim Sweeney the other day in which he talks about the problems with the computer gaming industry.

Specifically, how broad the graphics hardware base is across the computing industry: from the integrated chips that ship with those two-hundred-dollar specials at the local discount store to the high-end, four-thousand-dollar custom-built machine.

His woes are our woes. And comparing his response to ours in the web development community makes for some interesting points.

His problem is diverse hardware. His games require a certain level of hardware which many users simply do not have.

Our problem is diverse browser use. Our websites require Firefox 2 or Opera 8 to render perfectly, which many users simply do not have.

His response is rather brash: that computer manufacturers should not be allowed to integrate cheap, low-end graphics hardware into their machines. That seems a bit drastic to me, especially when not every user needs to do high-res gaming; some just browse the internet.

It’s similar for us, although the requirements are defined by the developer: I have content I need to give to you, and you need to meet this requirement to access it.

He talks about how the original Unreal could use its software renderer at 320×240 resolution and it worked fine, while those with high-end hardware could scale their graphics up to 1024×768. In many ways this is how I see the way websites should work: if you don’t have Flash, JavaScript, and Firefox 2 (our “high-end” machine), you can still use the website. It’s a simple matter of using noscript tags, providing alternative links to download video and audio content (normally embedded in a Flash movie on the page), and having AJAX-powered forms revert to the Web 1.0 method of moving data between server and client: the POST method.
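As a sketch of that last point (the form id, endpoint, and status element here are all invented for illustration, and I’m writing it as modern TypeScript rather than the script of the day): the form performs an ordinary POST for everyone, and script, where available, upgrades it to an asynchronous submission.

// Hypothetical progressive-enhancement sketch: without script the browser
// performs a normal POST to the form's action; with script we intercept
// the submit and send the same data asynchronously instead.
const form = document.getElementById("comment-form") as HTMLFormElement | null;

if (form) {
    form.addEventListener("submit", (event) => {
        event.preventDefault(); // low-end clients never reach this line

        // Serialize the fields by hand into application/x-www-form-urlencoded.
        const fields = Array.from(form.elements) as HTMLInputElement[];
        const body = fields
            .filter((el) => el.name)
            .map((el) => encodeURIComponent(el.name) + "=" + encodeURIComponent(el.value))
            .join("&");

        const xhr = new XMLHttpRequest();
        xhr.open("POST", form.action);
        xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
        xhr.onload = () => {
            // Update the page in place instead of reloading it.
            const status = document.getElementById("status"); // invented element
            if (status) status.textContent = "Comment posted.";
        };
        xhr.send(body);
    });
}

Strip away the script and the same markup still round-trips through a full page load, which is exactly the fallback described above.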

But now the gap between low-end and high-end is so large that Tim Sweeney seems to think it’s not possible to offer both a low-end and a high-end interface to the game without essentially rewriting the entire thing for each user group. There’s no simple means to downgrade the high-end for low-end users.

That’s something we’ve been doing for years. Many sites would have “Flash or no-Flash” versions: two independently managed websites. But more and more I think web developers are taking Sweeney’s view: ditch the alternative website and say “tough” to those who don’t have the high-end software to view the site.

So what’s the easy, grand-unified solution?

There isn’t one. I think what’s inevitable in web design will be inevitable in gaming as well. Instead of universally usable websites, we’ll see some websites that maintain low-end requirements (indirectly targeting low-end users), and then we’ll have high-end websites that low-end users simply won’t be able to use.

When Web 3.0, 4.0 or even 10.0 hits, we’ll see full applications written in some interpreted script (think JavaScript, but with a much higher degree of access to client resources like graphics, the filesystem, etc.). They’ll be very pretty, fun to interact with, and provide instant feedback to users. They’ll be little more than virtual machines which load their OS from the web. There will be lots of debate about how an OS doesn’t belong in a browser; about how a browser serves a specific purpose and turning it into a general-purpose application is wrong. Some might listen, but it won’t stop the inevitable.

And text-based browsers (Lynx), screen readers, and screen scrapers (search engines are a form of screen scraper, by the way) will simply be left out.

OR

and this is the ironic, but funny part,

Web 32.3 will include a NEW protocol. One designed specifically for TEXT ONLY. A channel that will be used by search engines and data mining applications as well as screen readers. And it’ll look suspiciously like something Tim Berners-Lee created decades earlier.

And now my AMAZING IDEA

Here is my idea for Web 42.0, and this approach is something I think would work amazingly well with gaming too. In fact, it’s an idea I first had years ago while working on a computer game.

We have a data channel. This will be XML based. Content is delivered through this data channel. AND ONLY CONTENT! The data will be separate from the presentation of the data.

There will be a second channel. A presentation channel. Also XML based. This will dictate how data should be presented.

A client application would then be able to decide on its own how much or how little of the presentation channel’s instructions it follows. Low-end clients would almost entirely ignore the presentation channel, since they can’t support its more intense presentation instructions (music, 3-D graphics, etc.). The format of the presentation channel would build from broad, simple instructions (DisplayText) down to finer and finer instructions encapsulated inside the base, broad, simple instruction. So a DisplayText instruction might contain text color information, size, font, alignment, 3-D space dimensions and alignment, event handlers (hover, click, etc.), and interactivity instructions (is it draggable, can it be rotated, etc.).
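To make that concrete, here’s a rough, invented sketch of what the two channels might carry for a single headline, written as TypeScript string constants; none of these element names are a real spec:

// Invented example payloads; the element names are illustrative only.
// The data channel carries content and nothing but content:
const dataChannel = `
    <story id="42">
        <headline>Hardware diversity and the web</headline>
        <body>His woes are our woes...</body>
    </story>`;

// The presentation channel describes how that content should look, building
// from a broad instruction (DisplayText) down to finer and finer detail:
const presentationChannel = `
    <DisplayText target="story:42/headline">
        <TextColor value="#333333"/>
        <FontSize value="24px"/>
        <Align value="center"/>
        <OnHover action="glow"/> <!-- high-end-only refinement -->
    </DisplayText>`;

A text-only client reads the data channel and the outermost DisplayText and stops there; everything nested deeper is a refinement it is free to skip.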

A client would then pick how deep into a command it goes. A text-based system would only go one level deep (DisplayText), while a high-end system would go all the way down into pixel-shader instructions for various objects.
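A minimal sketch of that client-side choice, with an invented Instruction shape (none of this is a real API):

// Hypothetical shape of a parsed presentation instruction.
interface Instruction {
    name: string;            // "DisplayText", "TextColor", "PixelShader", ...
    value?: string;          // payload for leaf instructions
    children: Instruction[]; // finer instructions nested inside this one
}

// Walk the instruction tree, ignoring anything nested deeper than maxDepth.
// A text-based client calls render(tree, 1) and applies only DisplayText;
// a high-end client passes a large depth and honors every refinement.
function render(instr: Instruction, maxDepth: number, depth: number = 1): void {
    if (depth > maxDepth) return; // this client's cutoff for finer detail
    apply(instr);                 // hand off to this client's display backend
    for (const child of instr.children) {
        render(child, maxDepth, depth + 1);
    }
}

// Each client supplies its own backend; declared here to keep the sketch small.
declare function apply(instr: Instruction): void;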

The presentation stream would have to be a two-way channel in which the client sets its depth of detail ahead of time, so as not to clog the stream with information the client will simply ignore. Although perhaps that’s not strictly a requirement, especially if the bandwidth were wide enough (or the protocol overhead small enough).
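The up-front part could be as small as a single hello message sent before any presentation data flows. A hypothetical shape (nothing here is a real protocol):

// Hypothetical handshake: the client declares its depth of detail up front
// so the server never streams presentation instructions it would ignore.
interface Channel {
    send(message: string): void; // stand-in for whatever transport is used
}

function negotiate(channel: Channel, maxDepth: number): void {
    // The channels are imagined as XML, so the hello message is XML too.
    channel.send(`<hello maxDepth="${maxDepth}"/>`);
}

A text-based client would negotiate with maxDepth 1, and the server could prune the instruction tree before it ever hits the wire.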

I think this is an approach that could work for both the web and for gaming. Especially as games move more and more to an online distribution format.

These concepts are not new. The separation of presentation logic and data shouldn’t be new to any web developer; we already do this, and it’s called CSS. We already have the data channel as well; it’s called XHTML.

Essentially what I’m talking about here is a complete reworking of CSS. Or actually removing CSS and creating SOMETHING NEW which contains everything interface-like. Maybe this is XUL or some other Mozilla XML-based language that’s already out there. I don’t know.

But such a system would be accessible to anyone, regardless of browser version, computer hardware, or interface (not everyone need use a computer monitor to interact with these channels, you know!)

The biggest drawback of this sort of approach will be what I’ll call “Quality of Experience”. If it were a game, for example, perhaps the immersion wouldn’t be as intense on a text-based interface as it is with a big 3D display (although people who’ve played Zork may say otherwise).

However, my feeling on that is that these text-based users don’t know anything outside of their computing environment. If they don’t have the 3D hardware to see the full game, they won’t know what they’re missing.

But perhaps the different interfaces will bring different interpretations and a game that is loved on the 3D side is despised on the text-based side (or vice versa). And that gap in QoE might be too much for the game publishers’ PR machines to want to deal with.

Personally I like the idea that not everyone has the same experience. I like the idea that there might be such a wide difference in experience between the text-based user and the 3D user. Why? Because if nothing else it’ll start to teach people about different perspectives in experience. Maybe it’ll lead to a higher quality of game or experience for everyone. Maybe games with strong visuals but weak stories will benefit when the text-based users start shouting “HEY, YOUR STORY IS CRAP”, or the other way around, when a good story is destroyed by really bad graphics.

I like differences in experience. I like the idea of people comparing them and using that information to develop a more robust and better experience for users.

But then again I tend to believe in silly Utopian ideas, and rarely do they ever turn out to be as bright as I imagine.

IE 8 & Spiders

IE8 Beta 1 is out, and you can download it if you dare.

I dared. But the install is failing on my machine at work for some reason. I’ll have to get it up on my home machine.

In the meantime I’m told there’s a transparency issue with Ruthsarian Menus in IE8.

Albandi has posted a fix in the comments of this blog, which I shall now relay to you, the reader:

ul.rMenu li ul a {
    z-index: 90;               /* lift submenu links higher in the stacking order */
    background-color: #fffde8; /* give them a solid background instead of a transparent one */
}

Once I’ve got IE8 up and running and can test this out myself, I’ll post fixed CSS.

Now for some shameless plugging. As you probably know if you read this blog, I’m a bit of a fan of the Disney animated show Gargoyles.

Well, many of the people who worked on Gargoyles are back with a new series called The Spectacular Spider-Man. It premieres this Saturday, March 8, in the US on Kids’ WB at 10am. Die-hard Spider-Man fans will not be disappointed. Greg Weisman, co-creator of Gargoyles and executive producer of the show (as well as writer of the first episode), comes from a comic-book background, having gotten his first writing job on DC Comics’ Captain Atom.

He’s more than well-versed in all the variations on Spider-Man, and I think everyone, from the die-hard Lee/Ditko fan to the ten-year-old who only knows Spider-Man through those movies, will be entertained.

So check it out. This Saturday, 10am on the CW.