Tim Sweeney is a game programmer. He is to Unreal what John Carmack is to Quake. I caught this interview with Tim Sweeney the other day in which he talks about the problems with the computer gaming industry.
Specifically, how broad the graphics hardware base is across the computing industry: from the integrated chips that ship with the two-hundred-dollar specials at the local discount store to the high-end, four-thousand-dollar custom-built machines.
His woes are our woes. And comparing his response to ours in the web development community makes for some interesting points.
His problem is diverse hardware. His games require a certain level of hardware which many simply do not have.
Our problem is diverse browser use. Our websites require Firefox 2 or Opera 8 to render perfectly, which many users simply do not have.
His response is rather brash: that computer manufacturers not be allowed to integrate cheap, low-end graphics hardware into their machines. That seems a bit drastic to me, especially since not every user needs to do high-res gaming; many just want to browse the internet.
It's similar for us, although the requirements are defined by the developer: I have content I need to give to you, and you need to meet this requirement to access it.
But now the gap between low-end and high-end is so large that Tim Sweeney seems to think it's not possible to offer both a low-end and a high-end interface to the game without essentially rewriting the entire thing for both user groups. There's no simple means of downgrading the high end for low-end users.
That's something we've been doing for years. Many sites would have "Flash" and "no-Flash" versions: two independently managed websites. But more and more I think web developers are taking Sweeney's view: ditch the alternative website and say "tough" to those who don't have the high-end software to view the site.
So what’s the easy, grand-unified solution?
There isn’t one. I think the inevitable in web design will be inevitable in gaming as well. Instead of universally-usable websites we’ll see some websites that maintain low-end requirements (indirectly targeting the low-end users) and then we’ll have high-end websites that low-end users simply won’t be able to use.
And text-based browsers (Lynx), screen readers, and screen scrapers (search engines are a form of screen scraper, btw) will simply be left out.
and this is the ironic, but funny part,
Web 32.3 will include a NEW protocol. One designed specifically for TEXT ONLY. A channel that will be used by search engines and data mining applications as well as screen readers. And it’ll look suspiciously like something Tim Berners-Lee created decades earlier.
And now my AMAZING IDEA
Here is my idea for Web 42.0, and this approach is something I think would work amazingly well with gaming too. In fact, I first had this idea years ago while working on a computer game.
We have a data channel. This will be XML based. Content is delivered through this data channel. AND ONLY CONTENT! The data will be separate from the presentation of the data.
There will be a second channel. A presentation channel. Also XML based. This will dictate how data should be presented.
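To make the two-channel idea concrete, here's a minimal sketch in Python. Everything in it is invented for illustration: the `<content>`/`<item>` data format, the `<presentation>`/`DisplayText` instruction names, and the `ref` linking attribute are hypothetical, not an existing standard.

```python
import xml.etree.ElementTree as ET

# Hypothetical data channel: content only, nothing about how to show it.
DATA = """
<content>
  <item id="headline">The hardware gap keeps widening</item>
  <item id="body">Low-end and high-end machines now live in different worlds.</item>
</content>
"""

# Hypothetical presentation channel: points at the data and suggests styling.
PRESENTATION = """
<presentation>
  <DisplayText ref="headline" size="32" color="#222222"/>
  <DisplayText ref="body" size="16" color="#444444"/>
</presentation>
"""

# The client joins the two channels; it may honour or ignore any attribute.
data = {el.get("id"): el.text for el in ET.fromstring(DATA)}
for rule in ET.fromstring(PRESENTATION):
    print(rule.get("ref"), "->", data[rule.get("ref")], dict(rule.attrib))
```

The point of the separation is that a text-only client could consume the data channel alone and still get everything meaningful, while a richer client layers the presentation rules on top.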
A client application would then decide on its own how much or how little of the presentation channel's instructions to follow. Low-end clients would ignore the presentation channel almost entirely, since they can't support its more intense instructions (music, 3-D graphics, etc.). The format of the presentation channel would build from broad, simple instructions (DisplayText) down to finer and finer instructions encapsulated inside that broad base. So a DisplayText instruction might contain text color, size, font, alignment, 3-D space dimensions and alignment, event handlers (hover, click, etc.), and interactivity instructions (is draggable, can be rotated, etc.).
A client would then pick how deep into a command it goes. A text-based system would only go 1 deep (DisplayText) while the high-end system would go all the way down into pixel shader instructions for various objects.
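That depth-picking behaviour can be sketched as a simple tree walk. The nested tags here (`DisplayText`, `Style`, `Space3D`, `Shader`) are made-up names for the sketch, meant only to show broad instructions wrapping ever-finer detail.

```python
import xml.etree.ElementTree as ET

# Hypothetical nested instruction: broad on the outside, finer detail inside.
INSTRUCTION = """
<DisplayText text="Welcome, traveller">
  <Style color="#ffcc00" font="serif" size="24">
    <Space3D x="0" y="1.5" z="-3" billboard="true">
      <Shader program="glow" intensity="0.8"/>
    </Space3D>
  </Style>
</DisplayText>
"""

def render(node, depth):
    """Honour only `depth` levels of the instruction tree."""
    if depth <= 0:
        return []
    handled = [(node.tag, dict(node.attrib))]
    for child in node:
        handled.extend(render(child, depth - 1))
    return handled

root = ET.fromstring(INSTRUCTION)
print(render(root, 1))  # text-only client: just the DisplayText
print(render(root, 4))  # high-end client: down to the shader instructions
```

The same instruction stream serves both clients; each simply stops descending when it runs out of capability.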
The presentation stream would have to be a two-way channel in which the client sets the depth of detail ahead of time, so as not to clog the stream with information the client will simply ignore. Although perhaps that's not strictly a requirement, especially if the bandwidth were wide enough (or the protocol overhead small enough).
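The server's side of that negotiation could be as simple as pruning the instruction tree before sending it. Again, a hypothetical sketch (the tag names and the `prune` helper are invented here, not part of any real protocol):

```python
import copy
import xml.etree.ElementTree as ET

def prune(node, depth):
    """Server-side: return a copy of the instruction tree cut off
    below the depth the client declared ahead of time."""
    node = copy.deepcopy(node)

    def cut(el, d):
        if d <= 1:
            for child in list(el):
                el.remove(child)
        else:
            for child in el:
                cut(child, d - 1)

    cut(node, depth)
    return node

full = ET.fromstring(
    '<DisplayText text="hi"><Style size="24"><Shader program="glow"/></Style></DisplayText>'
)
slim = prune(full, 1)            # what a text-only client asked for
print(ET.tostring(slim))         # top-level instruction only
print(ET.tostring(full))         # the server's full tree is untouched
```

A text-only client negotiating depth 1 never pays the bandwidth cost of the shader and 3-D detail it would have thrown away anyway.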
I think this is an approach that could work for both the web and for gaming. Especially as games move more and more to an online distribution format.
These concepts are not new. The separation of presentation logic and data shouldn't be new to any web developer; we already do this, and it's called CSS. The data channel we already have as well; it's called XHTML.
Essentially what I’m talking about here is a complete reworking of CSS. Or actually removing CSS and creating SOMETHING NEW which contains everything interface-like. Maybe this is XUL or some other Mozilla XML-based language that’s already out there. I don’t know.
But such a system would be accessible to anyone, regardless of browser version, computer hardware, or interface (not everyone needs to use a computer monitor to interact with these channels, you know!).
The biggest drawback of this sort of approach will be what I’ll call “Quality of Experience”. If it were a game, for example, perhaps the immersion wouldn’t be as intense on a text-based interface as it is with a big 3D display (although people who’ve played Zork may say otherwise).
However, my feeling is that these text-based users don't know anything outside of their own computing environment. If they don't have the 3D hardware to see the full game, they won't know what they're missing.
But perhaps the different interfaces will bring different interpretations and a game that is loved on the 3D side is despised on the text-based side (or vice versa). And that gap in QoE might be too much for the game publishers’ PR machines to want to deal with.
Personally I like the idea that not everyone has the same experience. I like the idea that there might be such a wide difference in experience between the text-based user and the 3D user. Why? Because if nothing else it'll start to teach people about different perspectives in experience. Maybe it'll lead to a higher quality of game or experience for everyone. Maybe the strong-visuals, weak-story games will benefit when the text-based users start shouting "HEY, YOUR STORY IS CRAP," or the other way around when a good story is destroyed by really bad graphics.
I like differences in experience. I like the idea of people comparing them and using that information to develop a more robust and better experience for users.
But then again I tend to believe in silly Utopian ideas, and rarely do they ever turn out to be as bright as I imagine.