Responsive Web Design

Responsive web design is a term used to describe a web design that adapts itself to the end-user's environment. This isn't a new concept, but advancements in browser technology, along with the growing variety of viewing environments (desktop, netbook, smartphone, etc.), are adding some new and interesting tricks to our bag.

Responsive web design is something that's been around since the first web designs came out utilizing percent values to define the widths of objects such as TABLE, DIV, and IMG elements. The idea was that a person on a 640×480 screen could view the web site, while a person with a larger resolution like 1024×768 would see more content, as the layout would expand to the edges of the larger viewport (the area in which a web page is rendered). This idea was somewhat radical back in the early 1990s, as most layouts simply set a fixed width (usually 780 or 1000 pixels) and stated quite plainly on the web site that it was designed to be "best viewed in 1024×768".

Things got more interesting with CSS. Web designs could now use percentages for margins, padding, and the placement of elements within a page, and layouts that expand and contract with the viewport could be more complicated. Especially handy were the CSS max-width and min-width properties. These let you set limits on just how far your layout would expand or contract, no matter how large or small the viewport became. Now layouts could expand and contract, but not so much that they became unusable.

Enter CSS3 media queries.

But first, a small caveat to web developers.

The CSS3 spec is not yet final. In fact, there isn't (or is, depending on your point of view) really a CSS3 spec at all: CSS3 is a collection of modules and therefore somewhat open-ended. You can see a list of some of the CSS3 modules and their current status here. Some CSS3 modules are still in an early draft and not recommended for implementation, while others have reached a state where they're not quite final, but browser authors are encouraged to implement them and supply feedback to help refine the spec before it becomes final. Media queries are in such a state, and several modern browsers have already implemented them, including the latest releases of Firefox, Chrome, Safari, and Opera. So web developers can start making use of them now, and those users who run modern web browsers will be able to enjoy your use of CSS3.

At the same time, I do not recommend you rely heavily on CSS3 modules in any designs for the near future, and if you do, make sure to check the layout's compatibility against older browsers. While the general public appears to be getting better at keeping their browsers up to date (mainly thanks to the automated update features of several OSes and the increased proliferation of broadband internet connections), there's still a significant number of users (over 50% according to this site) on Internet Explorer 8 or earlier, which does not support media queries (or much else in the way of CSS3). IE9 will support media queries.

Now that that’s out of the way, what can media queries do for us? Simply stated, you can apply styles based on many attributes of the viewport. To keep things brief-ish and to the point, I’m going to specifically focus on width-related media queries. Let’s see what one looks like:

#frame { width: 1000px; margin: 0 auto; } /* fixed width, centered */
@media screen and ( max-width: 480px ) {
  #frame {
    width: 100%; /* fill narrow viewports instead */
  }
}

The above example makes an element with the id of "frame" 1,000 pixels wide and centers it (via the auto value for the left and right margin properties) in the middle of the viewport whenever the viewport is wider than 1,000 pixels. The second line begins a media query. It starts with @media screen (in place of "screen" you can use "all" for any media, "print" for print media, etc.), which tells the browser that the following block of CSS should only be applied to the screen. This is followed by and ( max-width: 480px ), the query itself. It states that the enclosed block of rules should only be applied when the viewport is 480 pixels wide or less. You can also use min-width to tell the browser a block of rules should only be applied when the viewport is at least a certain width.
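For example, a min-width query might look like this (a hypothetical rule, not part of the example above):

@media screen and ( min-width: 1280px ) {
  #frame {
    width: 1200px; /* only applied when the viewport is at least 1280 pixels wide */
  }
}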

Why would you want such a thing? If a viewport is that narrow and you have wide elements or multiple columns, chances are those elements will be barely usable at such a narrow width. With a media query you can change the layout to something more usable on a small screen. For example, consider a layout that has multiple columns. At 480 pixels your columns will barely fit a few words per line, if any at all. Reading the text becomes very difficult. With a media query you can drop your multiple columns into a single-column layout. Now each column has the full width of the viewport to render its content, making it much more usable.
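A minimal sketch of that idea (the .column class is made up for illustration):

.column { float: left; width: 33%; } /* three columns side by side */

@media screen and ( max-width: 480px ) {
  .column {
    float: none;   /* stop floating the columns */
    width: 100%;   /* each one now spans the full viewport */
  }
}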

In the very first example, I remove the fixed width on the #frame element and reset it to 100% of the viewport. Smartphone users can then view the content of the layout much more easily, without having to scroll horizontally to read each line.

Excellent! Now go and design for the masses!

But… not quite yet. There's one problem. Some smartphones are a bit too smart. They know that few web designs out there take advantage of these features, so they scale the rendering of the page instead. While the screen may only have 480 pixels of width, the browser will render web pages as if it had 1024 pixels of width and then scale the result down to fit the screen. Media queries alone won't fix this problem.

However, there is a solution.

The viewport meta tag allows you to tell browsers how to scale the web page. Most mobile browsers understand this tag, including the mobile version of Safari found on many Apple devices and the mobile version of Opera found just about everywhere else. So to fix the mobile browser scaling issue, just include the following META tag in your web page:

<meta name="viewport" content="width=device-width, initial-scale=1.0">

This tag tells the browser to render the web page with 1-to-1 pixel scaling, meaning one pixel of your web site is one pixel on the device's screen. There are many other attributes you can include in this META tag, such as maximum and minimum scaling values, or whether the user is allowed to scale the page at all. Disabling user scaling effectively disables the pinch-zoom feature of many mobile devices; I recommend against it.
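For example, a tag like this one (illustrative values) would cap how far the user can zoom in or out; there is also a user-scalable=no attribute, which is the pinch-zoom killer I just advised against:

<meta name="viewport" content="width=device-width, initial-scale=1.0, minimum-scale=0.5, maximum-scale=2.0">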

One last thing to touch upon before I give you a working example. It’s a simple block of CSS that will help make everything run much more smoothly on a mobile browser:

img { max-width: 100%; }

This little CSS rule allows images to shrink as needed as their containing element shrinks. So a 600-pixel-wide image will shrink enough to fit in a 480-pixel-wide browser. It's a cheap but very useful trick. However, I suggest you don't use the exact CSS rule given above. Applying it to all images may have unexpected negative effects on images you don't want shrinking. Instead, use a more specific CSS selector such as

#frame .blog-post .post-content img { max-width: 100%; }

The more specific you are, the fewer surprises you'll have down the road.

As promised, I have put together a demonstration web page that utilizes all these techniques. Grab a modern browser that isn’t Internet Explorer, open the layout, and start shrinking and growing your browser’s window to see the content and layout change to fit within the width of your window. Pop it open on your smartphone’s browser as well!

MARS: Responsive Layout Demo

The stylesheet is embedded in the source to encourage you to take a look at the whole source for the page. It has also been given some CSS hacks so that the layout will work even in IE 7 and IE 6 (in theory), but all the media query stuff will obviously not work.

One final note.

At home I have a very wide (16:10) monitor, while at work I have a standard (4:3) monitor. I like to set max-widths on my layouts because keeping a given line of text short from end to end helps reduce eyestrain, and the whitespace on either side of the layout is visually pleasing. But on a very wide monitor there is too much whitespace on the sides, and it makes the layout look very small. With media queries I can fix this problem by increasing the max-width of my layout frame on screens with a very large width (over 1280 pixels). I could also increase the font size to help with readability. The point is that media queries aren't just for keeping your pages usable on a smartphone; they help keep them usable on very large screens as well.
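Something along these lines would do it (a sketch reusing the #frame id from earlier):

#frame { max-width: 1000px; margin: 0 auto; }

@media screen and ( min-width: 1280px ) {
  #frame {
    max-width: 1200px; /* let the layout breathe on very wide monitors */
    font-size: 1.1em;  /* slightly larger text for readability */
  }
}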

A Brief Ping

It’s been nearly half a year since my last post and that needs to be rectified.

There are a few things worth talking about, such as the situation with the HTML spec. W3C maintains one HTML spec, while WHATWG maintains another. They were playing nicely, but there's been a small shake-up recently, with WHATWG deciding to drop version numbers from its spec while the W3C will continue to call the latest spec HTML5. How the hell did we wind up in this situation? Well, WHATWG has a great piece in their HTML spec that gives a brief history of HTML. It's not too long and very much worth reading to get an idea of where HTML came from and where it's going.

News web sites tend to have a template ready to go whenever there's "breaking news". It's often a big, brightly colored box with bold text stating "BREAKING NEWS". This is quite an attention getter, a fact not lost on modern news web site developers. Lately, "BREAKING NEWS" web stingers are being used to report extremely mundane news. The reason they're used at all is simply to grab your attention and get you to click through to read the full story. The problem is these news sites are quickly diluting the news value of their "BREAKING NEWS" stingers, and sooner rather than later we'll start to ignore them altogether. So what will news web sites do when real breaking news happens? I have some thoughts on that which I'll save for a later blog post, but it's something to consider.

A small gift to those who have read this far. Here is a template I put together last week to test palette options. This isn’t a layout. You could use it for a web site, but I haven’t done anything to make it compatible with older browsers. It’s more just to test potential color palettes for web sites. And a great place to find palettes is COLOURlovers.

Now that I’ve given away one of my design secrets, I’m off. I expect there will be less of a delay between this and my next post.

A Look Into The Crystal Ball

“The Cloud” is a buzzword meaning third-party hosted internet applications.

To the individual this means being able to manage your content from any location that provides some form of internet access. Things like gmail, Google Docs, Flickr, WordPress.com and YouTube are examples.

To the corporation it’s a form of outsourcing. Gone are the days of large data centers to manage corporate information. Now the information is stored in the cloud. Now it’s someone else’s nightmare to manage.

It’s also a security nightmare for those who dare to take a moment to consider the security costs rather than the monetary costs associated with “the cloud”. You lose control of your information. You’re putting it into the hands of a third-party. You may have a contract with them that makes them responsible for any security breaches. In fact some managers prefer the cloud specifically because there’s a security contract that, in legal form at least, takes responsibility off their heads. But that doesn’t mean a security breach won’t happen. And when it does, when the genie is out of the bottle, what becomes more significant, that the information is out there or that you’re not going to get fired?

Sadly, I think most IT managers would answer the latter.

Privacy is one aspect of security, but it's a concept that's slowly starting to catch on. The public is slowly and painfully becoming aware that putting all their information on third-party sites, which probably don't have the individual's best interests in mind, is a bad idea. The recent stir surrounding Facebook's privacy issues is one example of this.

And as much as I would love to see this catch hold and become a driving force that tears apart “The Cloud” and everything “2.0”, it won’t. People will be quick to forgive or forget or to tolerate those privacy issues in return for easy access to information and entertainment.

“The Cloud” is probably not going to go away. In fact, it’s probably going to be the future. And it will be incredibly attractive.

I’m going to focus mainly on “the Cloud” from the individual perspective.

Imagine an iTunes subscription. You could stream any music you want to any device you have, be it your phone on the way to work, a tablet at home while you read a book, a set-top box at a friend's house that needs some background music for a party, or your hotel room while on a trip. You don't have to carry your music with you at all. It's in "The Cloud", and you can stream it from any electronic device.

Now apply the same idea to NetFlix or Hulu. Actually, NetFlix and Hulu (in many ways) already do exactly this.

E-mail? Photos? Documents? Gmail. Flickr. Google Docs.

All your information is in “The Cloud”. Ready for you to pull it up on any internet-enabled device, from your iPhone to a computer in an internet cafe halfway around the world.

This type of access already exists, but the interface is too clunky. I imagine a near future with some sort of set-top device that you plug into a television to provide all of this for you. It would become a common feature at most hotels, and certainly everyone from grandma on down would have one in their house. No real computer needed anymore, just a thin client with a web browser and maybe a hardware video decoder.

And for the high-end user there would be the portable device. Something like an iPhone 4, but with extra features like a micro projector to watch movies on the wall (who needs a TV?) and a small suction-cup device that turns any flat surface (window, table, etc.) into a large speaker to provide clear sound.

There would be movie parties. Where everyone would sign into a private room on NetFlix and could watch the same movie and talk to each other, while each person sits in their own room separated by hundreds of miles.

Friendships would no longer need face-to-face meetings. Everyone becomes an avatar. A projected personality that may or may not relate to the individual’s physical presence.

Soon meetings would be conducted in a similar fashion. No longer do we need a boardroom; simply start up your telepresence app and everyone sees and hears each other without ever having to leave their cubicle.

Of course, in such a world a room full of cubicles with dozens of separate conversations going on would create quite a bit of background noise and interference. Which is why cubicles would become something more like miniature offices with soundproof walls and no windows.

You don't really need to see outside. Just start up the local weather app to project an image of what it looks like outside, tailor-made to your preferred surroundings, be it a wooded area or an urban setting.

Eventually such archaic devices as projectors, speakers, and microphones will be made obsolete by brain implants. Telepresence becomes more real. You don't just see, you feel and smell and taste and touch. Physical contact with others can be achieved even though you're thousands of miles apart.

The porn industry reaches its full potential. It’s not prostitution anymore, it’s all virtual. Every depraved and deviant fetish is now catered to, and it’s all virtual, it’s all fake. But it will feel very real. And what is real, but signals processed by your brain. It is real.

And one day some alien race might finally find our planet. Perhaps touch down and have a look around. They’ll walk the halls of large buildings filled with small, personal-refrigerator sized cubicles. Automated machines keep everything running smoothly. Perhaps interest gets the better of them. They peek inside one of the refrigerators and find a curious gray mass in a container of goo with some probes.

Say hello, then, to your grandchildren a million years from now.

In The Zone With Time

The account-expires attribute in Active Directory carries a time value in 100-nanosecond units since January 1, 1601 00:00 UTC. In fact, quite a lot of Microsoft products now use this epoch. Java and Unix use January 1, 1970 00:00 UTC as their epoch.

This is important information if you’re ever going to develop code in ColdFusion that needs to, for example, figure out whether or not a domain account has expired.

Another piece of information that’s handy to have is an understanding of how ColdFusion handles large numbers. Numeric values are handled as signed 32-bit values that can range from -2,147,483,648 to 2,147,483,647. ColdFusion can handle much larger numbers, but it does so by converting numeric values into strings and then using its own library to handle math operations based on these strings. And it stores these large values in scientific notation. The problem with this is you start to lose precision. For example, if you run the following CF snippet:

<cfoutput>#( 2147483648 * 2147483648 )#</cfoutput><br>
<cfoutput>#( 2147483648 * 2147483648 + 1)#</cfoutput>

The output is the same for both operations: "4.61168601843E+018". This loss of precision makes it impossible to calculate the exact time account-expires refers to. So what else can we do?

One trick that sort of works is to divide the value in account-expires by 10000000 to convert from 100-nanosecond units to seconds. This will significantly shorten the number. So let's try that on an account-expires value of "129247307050000000" (that's 7/27/10 @ 2:58.25 EDT). The line of CF looks like this:

<cfoutput>#DateAdd( "s", ( 129247307050000000 / 10000000 ), "01/01/1601 12:00 AM")#</cfoutput>

Try to run this code and you get the error message "Cannot convert the value 1.2924730705E10 to an integer because it cannot fit inside an integer." It turns out that DateAdd() can't handle a number larger than 2147483647 (the limit of a 32-bit signed integer), and 2,147,483,647 seconds is only about 68 years of time. To use DateAdd() we need to make the account-expires value much smaller. We could, for example, calculate the account-expires value with respect to a different epoch, one that's within 68 years of the time referenced by account-expires. We could use any date we want, but I'm going to stick with Java's epoch of January 1, 1970 to keep things in line with what's probably the most common epoch among computer systems.

The number of seconds between January 1, 1601 and January 1, 1970 is 11,644,473,600 (that span covers 369 years, 89 of them leap years since 1700, 1800, and 1900 are skipped, and (369 × 365 + 89) × 86,400 = 11,644,473,600). If we plug this into the code above we get this:

<cfoutput>#DateAdd( "s", ( 129247307050000000 / 10000000 ) - 11644473600, "01/01/1970 12:00 AM")#</cfoutput>

Run this and we get a result! Specifically, {ts '2010-07-27 19:58:25'}. Remember, this is in UTC; we'll need to convert to our local timezone. We'll do that in just a moment. But first…

You should have recognized by now that this solution won't work forever. It will fail whenever the date in account-expires is after January 19, 2038. Perhaps this doesn't bother you, since whatever applications you're developing now will surely be obsolete, or running on a more modern ColdFusion where DateAdd() can deal with larger numbers, before account-expires values that large are in use.

Perhaps. But I hate making assumptions like that. Plan for the future! Assume nothing!

If we want lasting code free of the 2038 limitation we’ll need to steer clear of ColdFusion date and time functions. We can do this by using Java objects, specifically java.util.Calendar to manage date and time, and java.math.BigInteger to handle the very large numbers we’ll be working with.

The process is fairly straightforward, but there are a few things to watch out for. Java time is measured in milliseconds, not 100-nanosecond units, so we'll have to do some conversion. Java's and Microsoft's epochs are different. Calendar objects set to a date and time earlier than Java's epoch will have a negative millisecond value. And we need to calculate this date/time with respect to UTC (the GMT timezone), not the local timezone.

First create a BigInteger object. Initialize it to the value of account-expires, and then convert it to milliseconds as that’s the unit of time Java likes to work with.

01: <cfset variables.bigInt = CreateObject( "java", "java.math.BigInteger" ) />
02: <cfset variables.expTime = variables.bigInt.init( JavaCast( "String", "#arguments.accountExpires#" )) />
03: <cfset variables.expTime = variables.expTime.divide( variables.bigInt.valueOf( "10000" )) />

You can initialize BigInteger objects with strings and numeric values. I'm specifically casting account-expires as a string so that ColdFusion doesn't treat it as an integer and try to put it into scientific notation. Note in line 3 the use of BigInteger's valueOf() function, which returns a BigInteger (under the hood it takes a long; ColdFusion handles converting our string for us). Math functions on BigInteger objects require BigInteger arguments, so we need to pass all values (even static ones) through valueOf() instead of JavaCast().

Next we set up the calendar object. This means creating a java.util.TimeZone object, a java.util.Calendar object, and then initializing the calendar to the GMT timezone. Once the calendar object is initialized we set it to Microsoft’s epoch.

04: <cfset variables.tz = CreateObject( "java", "java.util.TimeZone" ) />
05: <cfset variables.Calendar = CreateObject( "java", "java.util.Calendar" ) />
06: <cfset variables.gCal = variables.Calendar.getInstance( tz.getTimeZone( "GMT" )) />
07: <cfset variables.gCal.set( 1601, 0, 1, 0, 0, 0 ) />

Note that the second argument of the calendar object's set() function is 0. Java's Calendar months are indexed from 0, so January is 0 instead of 1. This is something to be aware of whenever you're dealing with Java calendar objects.

This next step requires a little explanation. variables.expTime is now in milliseconds thanks to line 3. I'm going to add the value of the calendar, in milliseconds, to variables.expTime. Because the calendar is set to a date before Java's epoch, its value is a negative number, so the addition is effectively a subtraction. The resulting value of variables.expTime will be the number of milliseconds since Java's epoch.

08: <cfset variables.expTime = variables.expTime.add( variables.bigInt.valueOf( "#variables.gCal.getTimeInMillis()#" )) />

All we need to do now is reset the calendar to the value we now have in variables.expTime, then convert it to our local timezone and we’re done!

09: <cfset variables.gCal.setTimeInMillis( variables.expTime.longValue() ) />
10: <cfset variables.gCal.setTimeZone( tz.getDefault() ) />

Last, but not least, we convert the calendar object into a ColdFusion date/time object with ColdFusion’s CreateDateTime() function.

11: <cfset variables.expDateTime = CreateDateTime(
variables.gCal.get( variables.gCal.YEAR ),
variables.gCal.get( variables.gCal.MONTH ) + 1 ,
variables.gCal.get( variables.gCal.DAY_OF_MONTH ),
variables.gCal.get( variables.gCal.HOUR_OF_DAY ),
variables.gCal.get( variables.gCal.MINUTE ),
variables.gCal.get( variables.gCal.SECOND )
) />

Note that I’m adding 1 to the month variable because in Java 0 = January, but in ColdFusion 1 = January.

And there you have it: converting from Microsoft's 100-nanosecond, 01/01/1601-based timestamps to something a bit more usable in ColdFusion.
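As a quick usage sketch, suppose you wrap lines 01 through 11 in a function named adToDateTime() (a name I've made up for illustration) that accepts accountExpires as an argument and returns variables.expDateTime. Calling it would look like:

<cfset variables.expires = adToDateTime( "129247307050000000" ) />
<cfoutput>Expires: #DateFormat( variables.expires, "mm/dd/yyyy" )# at #TimeFormat( variables.expires, "HH:mm:ss" )#</cfoutput>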

Of course you’ll need a reverse of this process as well, and I’ve already got that for you:

01:  <cfset variables.tz = CreateObject( "java", "java.util.TimeZone" ) />
02:  <cfset variables.Calendar = CreateObject( "java", "java.util.Calendar" ) />
03:  <cfset variables.gCal = variables.Calendar.getInstance( tz.getDefault() ) />
04:  <cfset variables.gCal.set(
Year( arguments.Date ),
Month( arguments.Date ) - 1,
Day( arguments.Date ),
Hour( arguments.Date ),
Minute( arguments.Date ),
Second( arguments.Date )
) />
05: <cfset variables.bigInt = CreateObject( "java", "java.math.BigInteger" ) />
06: <cfset variables.expTime = variables.bigInt.init( JavaCast( "String", "#variables.gCal.getTimeInMillis()#" )) />
07: <cfset variables.expTime = variables.expTime.divide( variables.bigInt.valueOf( "1000" )) />
08: <cfset variables.expTime = variables.expTime.add( variables.bigInt.valueOf( "11644473600" )) />
09: <cfset variables.expTime = variables.expTime.multiply( variables.bigInt.valueOf( "10000000" )) />
10: <cfreturn variables.expTime.longValue() />

You should be able to follow along based on my explanation earlier in the article. However, there are a couple of extra bits I'd like to point out.

The calendar is initialized to the local timezone. We don't have to worry about converting back to UTC/GMT because the calendar's getTimeInMillis() function returns the number of milliseconds since Java's epoch, which is in UTC.

On line 8 I'm adding the number of seconds between Microsoft's epoch and Java's epoch. You'll also notice that on the line before it I'm dividing by 1000 to convert from milliseconds to seconds, and after adding the epoch difference I'm multiplying by 10000000 to convert to 100-nanosecond units. So why not skip the conversion to seconds, simply add three more zeroes to the end of the epoch difference (making it a difference in milliseconds), and multiply by 10000?

The reason is precision: done that way, the resulting value will vary by a few hundred milliseconds. (My suspicion is that this is the Calendar's doing rather than BigInteger's: Calendar.getInstance() starts out at the current time, and set() never clears the MILLISECOND field, so getTimeInMillis() carries some stray milliseconds along; dividing by 1000 truncates them away.) Since the times I'm working with are in seconds, those extra milliseconds can be safely ignored, and by dividing, adding, and then multiplying in that order I don't get the millisecond variances.

Before you go I have one piece of information I learned while working on this particular topic.

ColdFusion treats all date and time values as if they are in the local timezone. Specifically, if your timezone has daylight saving time, then all dates are treated as if they observe it, including GMT/UTC times, which do NOT observe daylight saving time. DateConvert() will not protect you from this issue. There's a deeper explanation here.

My solution is to check whether GetTimeZoneInfo().isDSTon is TRUE. If it is, I need to add (or subtract) an hour from my UTC time. Note that this is only needed when converting between UTC and local timezones with ColdFusion date and time functions. The code above specifically stays away from those functions, so this isn't a problem in my example. But it is something to keep in the back of your head when you start to notice dates being calculated an hour off from where they should be.

A bit of vinegar to go with the SOAP.

Not long ago I wrote a 3-part series on using SOAP over HTTPS with ColdFusion. My final solution was to create Java objects directly, bypassing ColdFusion’s CFHTTP tag.

I have since found a subtle flaw with this implementation.

It's not in the code, but in the JVM. ColdFusion 8 ships with an older JVM. I recently upgraded ours to a more current version in an attempt to resolve a timezone bug, and in doing so my SOAP application stopped working.

A little research led me to this article about a TLS bug in Java that could lead to a man-in-the-middle exploit. It appears the way I’m performing my SOAP operation triggers a TLS/SSL renegotiation when it receives a response from the external server.

The short answer is to add the following line to ColdFusion’s JVM arguments:

-Dsun.security.ssl.allowUnsafeRenegotiation=true
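Where this goes depends on your install; on a ColdFusion 8 server install it typically means appending the flag to the java.args line in jvm.config (usually cf_root/runtime/bin/jvm.config, though treat that path as a starting point rather than gospel):

# In jvm.config, append the flag to the end of the existing java.args line:
java.args=<your existing JVM arguments> -Dsun.security.ssl.allowUnsafeRenegotiation=true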

This does resolve the problem, but it apparently leaves the JVM vulnerable to MITM attacks. There is another bit of code in that article which shows how to change the allowUnsafeRenegotiation flag on-the-fly. I added this to my ColdFusion code, but changing the flag didn’t appear to have any effect.

If anyone else has played around with this particular problem I’d love to hear about it.

For now I've left the JVM in its vulnerable state, as we only make HTTP requests from the JVM for a couple of applications, and neither of them carries personally identifiable information.

University Website

This XKCD comic hit a little close to home. I work in higher education and the problem of what to put on our web site has been debated since it was first launched back in the early 90s.

The problem boils down to the question "Who is your target audience?"

With a higher ed web site you’ve got several distinct target audiences; there is no singular target audience. You’ve got prospective students, current students, faculty, (administrative) staff, alumni, parents of students, media/press, and the rest which we lump into “visitors”.

Each of those groups has its own specific informational needs.

Prospective students want to know what they would be investing their time and money in when choosing which university to attend. A web site targeting them needs to convey what their experience with the institution will be like. On most higher ed web sites this manifests as things like a virtual tour and promotional materials about special events or accomplishments.

A current student doesn't need any of that; they're already on campus. They need more utilitarian things: course schedules, transportation information, faculty contact information, and available resources like libraries, computer labs, the book store, dining, etc.

Alumni want to know what’s happening on campus, specifically things that make the institution (and thus their degrees) more prestigious. The university, in turn, wants to campaign to alumni for donations to help further grow the institution.

Faculty and staff tend to have more utilitarian needs like current students. Forms, procedures, policies, etc. as well as training and various HR-related operations.

Parents want to know their children are in a safe and healthy environment and that they’re getting their money’s worth.

Media/press want experts they can go to for quotes when stories happen. In turn the university wants to publicize all the really exciting and prestigious events happening on campus so the public (and the alumni, and the prospective students, and the parents) know what a great place it is.

The rest that we lump into “visitors” are usually coming from off-campus to attend some event being hosted on campus. They’ll want directions and parking information as well as contact information for those hosting the event.

There are areas of overlap, but (as the XKCD comic points out) there’s a lot of separation of the needs of each population.

So what do we do?

The first problem is the implication that the homepage of a web site is the whole of the web site. That the one web page must cater to exactly what the individual needs.

This is just not practical.

So what we do is break information down into logical components and then organize those components in a way that caters to a given audience. The way my institution handles this is by creating "landing pages" for each audience. Each landing page is a glorified list of links to those components of the web site that the given audience might be interested in. We try to group links together to make navigating a page of links a little easier. We also integrate a list of "most popular" links (based on web and search logs, so the list can change from time to time) in a prominent place on the page.

The homepage becomes something of a sign at a crossroads. We put a few bits of news and campus events (those that would be of interest to a general audience) alongside some links to the landing pages. The user looks at the links, selects which audience they are part of, and continues down their road.

The problem is not everyone realizes they should self-select and will instead take off into the woods, not following any road at all and either get lost or get lucky. This is why we tend to stick a search button on every page to act as something of a North Star for those who lose their way. But there are still those who refuse to look up at the stars or follow the road, find a comfortable spot, and start to scream.

Can we do better?

I’ve often thought about creating a web site interface along the lines of 20-questions. A sequence of simple questions with a YES and a NO button. Answer each question and eventually you’ll get to the page you want. We remove everything that could possibly create confusion. No logos. No images. No text other than the question. Simple black background with white text and two buttons and that’s it.

I think such an interface would be very successful at getting users to the desired information, but I also think it would create a backlash from users who perceive such a thing as being extremely condescending.

So can we do better?

Some might suggest a portal.

The "guest" portal, which everyone sees before logging in, would contain all the marketing material you might give to prospective students and visitors. Then users log in, and the portal does the audience selection for them. Faculty get faculty-oriented content, alumni get alumni-oriented content, and so on. And with a portal you can target very specific audiences (all faculty members in the math department, all sophomores who are both in the SGA and taking Greek philosophy, etc.) without the user having to do anything. The server does all the heavy lifting.

Integrate the portal with admissions and student accounts. Allow prospective students of a particular major to communicate with current students of the same major to get their advice on the coursework. Allow alumni of a particular school or major to see what students of the same major are doing now. There’s an infinite number of possibilities, all of them positive.

So that’s it then, a portal.

Well.

Portals work if you have the time and manpower to manage one properly. You can be a little bit lazy with a static layout. It's the difference between owning a pet goldfish and owning a pet monkey: yes, you're going to get far more out of your relationship with your pet monkey, but it's going to be a much bigger headache as well, requiring far more resources than a goldfish.

I’ve rambled way too long. I could write 50 pages on this. You’ll have to live with being cut off and not having everything answered.

Two points:

1.) University web sites may seem to lack the specific information you want right on the homepage, but that’s because there are a lot of different needs that have to be met in such a small space. Put a little effort into using the site and it WILL work for you.

2.) There is no absolute solution for distributing content among so many different audiences through a single web site. Figure out what you're willing to invest in a solution and start educating yourself on the options and their pros and cons. Then pick the solution that works best for your situation.

Americans with Disabilities Act (ADA) and the Web

The United States Department of Justice has announced that it plans to create rules that apply the ADA to the web.

I'd like to begin by pointing you specifically to the section titled "Barriers to Web Accessibility". It is a very good read, with clear and specific examples of how web content can be inaccessible to users with disabilities.

I think this is a Very Good Thing. Not for any humanitarian reason, but for the very selfish reason that it will force developers to create better web sites. It will force developers to ask "how will this affect users with disabilities?" before they implement a web site design.

For example, there are quite a lot of stylesheets out there that make heavy use of !important rules. These rules override anything else that would style a given element, including user-defined stylesheets applied by users who have difficulty with low-contrast web pages (think gray text on a white background). !important rules are almost always the product of lazy developers who don't take the time to learn the cascading order of CSS and resort to !important when they can't figure out why their style won't apply like it should.
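To make the problem concrete, here's a contrived pair of rules. In the CSS cascade, an author's !important declaration beats a user's normal declaration, so the user's high-contrast override silently loses:

/* Site (author) stylesheet: wins, thanks to !important */
p { color: #999 !important; }

/* User stylesheet (a high-contrast override): ignored unless the
   user also marks their own declaration !important */
p { color: #000; background: #fff; }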

However, there are stickier areas we'll all have to deal with. For example, CAPTCHAs: those little scrambled words you have to type into a box before you can submit a form. CAPTCHAs typically rely on images, which are inaccessible to blind users. reCAPTCHA employs an alternative, audio-based CAPTCHA alongside its image CAPTCHA for such users. I'm a big fan of reCAPTCHA and suggest it to all web developers.

Another problem will be video content. Blind and deaf users won't be able to access the full content of a video; however, providing video captions or (more correctly) a transcript will solve the issue. It's not a technological hurdle, just a tedious one. This web site specifically talks about YouTube and captioning as one way to solve this problem.

Mouse-driven events are yet one more problem area we'll need to deal with. I myself make heavy use of drop-down menus built on the CSS :hover pseudo-class. But try tabbing through a web page yourself and you'll see those drop-down menus never trigger. My approach to this issue has always been that the top-level items (the ones accessible to users who can only tab through the page) should link to pages from which the items in the drop-down are also reachable. The drop-down provides a shortcut, but you are not limiting access to information for disabled users.
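Here's a minimal sketch of that pattern (markup and names invented for illustration):

<ul id="nav">
  <li>
    <!-- The top-level link goes to a real index page that lists
         everything found in the drop-down below it. -->
    <a href="/academics/">Academics</a>
    <ul>
      <li><a href="/academics/majors/">Majors</a></li>
      <li><a href="/academics/calendar/">Calendar</a></li>
    </ul>
  </li>
</ul>

#nav li ul { display: none; }        /* drop-down hidden by default */
#nav li:hover ul { display: block; } /* revealed on mouse hover only */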

There are other areas to cover, but I'm not here to cover them all. In fact, I assume new problem areas will appear as technology progresses. The trick is to develop the mindset of constantly asking yourself, as you build a web site or web-based resource, "is this accessible?" If the answer is ever "no", you need to find a way to make that answer "yes". And, most importantly, follow through on making it a "yes" with vigor rather than apathy, as I tend to believe developer apathy is the cause of the majority of inaccessible web sites out there right now.

Obfuscated Javascript Spam

Recently I’ve been receiving phishing-spam in the form of official-looking Amazon.com invoices. Curiosity got the better of me and I clicked on the phishing link. The page that came up was blank. A quick source view revealed a bunch of obfuscated javascript.

I wanted to see how it worked.

Here is a sample line of the code:

mGdujq['euvLaulm'[VvIf](/[muzLc]/g, EWgUi)] \
(ltY(mGdujq[['uhnKehsKcKaKpleo'[VvIf](/[oKhlE]/g, EWgUi)]](IuO)));

Now what’s going on here?

Well, plain as day in the source I see a couple very important lines that will help decode this. The lines are:

var EWgUi = '';
var mGdujq = this;
var VvIf = '' + 'replace';

Armed with this information the line decodes easily before our eyes to

this['euvLaulm'['replace'](/[muzLc]/g, '')] \
(ltY(this[['uhnKehsKcKaKpleo'['replace'](/[oKhlE]/g, '')]](IuO)));

What’s left to decode is the use of shorthand regular expressions. For example let’s look at this piece of code

'euvLaulm'['replace'](/[muzLc]/g, '')

‘euvLaulm’ is just a regular old string. You can call the string’s replace function in many different ways such as:

var str = 'euvLaulm'; str.replace();
'euvLaulm'.replace();
'euvLaulm'['replace']();

The regex /[muzLc]/g simply matches any of the characters within the square brackets. The full line of code replaces every match with '' (an empty string), or in other words, deletes those characters from the string:

euvLaulm → eval

The result is the string ‘eval’.
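You can verify this in any javascript console:

'euvLaulm'.replace(/[muzLc]/g, ''); // returns "eval"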

So the fully interpreted line of javascript reads as follows:

this['eval'](ltY(this[['unescape']](IuO)));

Or in code more readable to my own eyes:

this.eval( ltY( this.unescape( IuO )));

Strewn throughout the javascript are lots of variable declarations that create strings of seemingly random letters and numbers. Upon close inspection you might notice a pattern to the strings: they consist of alternating hex and non-hex characters. (A hex character is 0-9 or a-f.)

Near the end of the code all these strings are concatenated and a series of replace operations are performed to replace all the non-hex characters with ‘%<hex character>’. The result is a string of URL escape sequences (a percent symbol followed by 2 hex characters). This string is stored in the variable IuO.

The URL-escaped data is then unescaped to create an array of bytes (a string, more or less, except the bytes aren't all printable characters, so I can't quite call it a string). This data is passed to the ltY function, which performs a ( <byte> XOR 13 ) operation on each byte. The result is a string of HTML that creates a hidden iframe to some porn referral page and a META refresh that redirects the user to a male-supplements web site after 4 seconds.
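For the curious, here is the deobfuscation boiled down to plain javascript. The names ltY and IuO come from the spam script; the function body is my reconstruction of what it does, not the spammer's exact code:

// IuO: the string of '%xx' URL escape sequences assembled from all
// those seemingly random variables scattered through the script.
function ltY( data ) {
  var out = '';
  for ( var i = 0; i < data.length; i++ ) {
    // XOR each byte with 13 to recover the original character
    out += String.fromCharCode( data.charCodeAt( i ) ^ 13 );
  }
  return out;
}
// The payload then runs as: this['eval']( ltY( unescape( IuO ) ) );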

That was fun. A little sleuthing and puzzle solving. But what is there to take away from all this?

First, I learned new ways to use and abuse javascript syntax, such as 'string'['function'].

Also curious was the large number of superfluous statements in the code. Variables would be created without initialization. Then they'd be initialized to an empty string. Then they'd be set to their real value. Three statements to perform an operation that could be done in one. I imagine this, along with the random upper- and lowercase letters used for both variable names AND data, makes the code more difficult to parse by hand (or by eye). A few minutes and a bit of perseverance will overcome that, but it does show these scams are designed to defeat the user who performs only a cursory inspection of the underlying code.

The multiple decoding steps needed to arrive at the final "attack" HTML suggest the code is designed to circumvent string-comparison spam filters. That there's more than one decoding step, plus a bunch of extra, useless javascript (if/else blocks with a single assignment to an unused variable), makes me wonder if it was also built to circumvent spam filters that are javascript-aware. It's working so far: my institution's normally solid spam filtering software has let this one slip by twice in the last week.

And it’s nice to see what kinds of tricks spammers have up their sleeves.

SharePoint Designer

I recently installed Office 2010 and, along with it, SharePoint Designer 2010.

SharePoint Designer was a child of Microsoft's WYSIWYG HTML editor FrontPage. Many people cut their teeth on HTML with FrontPage and were promptly told (rightly so) to ditch it for something better. But SharePoint Designer 2007, which is free for any Windows user to download, might actually not completely suck! What a bargain, then: a nice WYSIWYG HTML editor, free for anyone on a Windows OS.

But SharePoint Designer was not the only child of FrontPage to come out of Redmond. There is another WYSIWYG HTML editor called Expression Web. However one must purchase Expression Web; it is not free. I wonder why. SharePoint Designer 2010 answers this question.

SharePoint is a product from Microsoft that tries to solve a lot of business problems. It's perhaps best to think of it as a business intranet on a single server. It handles collaboration, web publishing, portals, wikis, blogs, etc. It's not a product, it's a platform. And SharePoint Designer is intended for developing content on SharePoint servers. But SharePoint Designer 2007 lets you create and edit standalone web pages. In essence, you could replace FrontPage with SharePoint Designer 2007. And don't forget that it's free! So that's what a lot of people did.

Enter SharePoint Designer 2010, which comes with a very large, very problematic restriction: it only lets you develop content for SharePoint servers. No longer can you manage just any old HTML content; if it's not on a SharePoint server, you can't touch it with SharePoint Designer 2010.

So all those folks who have looked to SharePoint Designer as their FrontPage replacement are in for a rude awakening.

What’s the Microsoft solution? Expression Web 2010, on sale now at the cut-rate price of US$149.00.

So what free alternatives are available? Well, SharePoint Designer 2007 is still available for download; maybe stick with that for now. Or you could experiment with Aptana or KompoZer. Or stick to a plain text editor (my preferred choice).

But this post isn't about evaluating alternative WYSIWYG HTML editors. This post is a simple warning to those of you who thought you had found your FrontPage replacement in SharePoint Designer. You didn't.

Apple’s Latest

First, a follow-up to my previous post about MPEG-LA. The current MPEG-LA license was renewed through 2015, which means any change in pricing wouldn't occur until then. So we've got at least another 5 years where we don't have to worry about web video. What happens then? Who knows. There's a nice breakdown of the MPEG-LA licensing in this article over at ZDNet, along with some numbers on just how many patents, from how many countries, are involved with MPEG-LA. It's mind-boggling.

Safari 5

Safari 5 was released today. Among its new features is one called Safari Reader, which recognizes a web page that contains an article (or blog post), pulls out just the article content, and formats it in a way that's easier to read. Essentially it's a chrome and ad remover. ("Chrome" in the sense of pretty but useless bits of a page layout, not the browser.) This feature has interesting implications, especially if it's popular enough to be copied by the other major browsers (which I think it will be). The obvious issues are a) stripping presentation control away from the content publisher and b) stripping revenue-generating ads away from the page.

But are they really issues?

The page needs to load, inside its intended chrome, before the reader option kicks in, so page views will still be generated. However, if an ad is animated or relies on user interaction, it will, essentially, be useless. Reader will also probably work around those particularly lame ads that pop up over the content you're trying to read. This will probably piss advertisers off.

As for the stripping away of chrome, I have mixed opinions. It makes very busy web pages much easier to read by removing distractions outside the content of the article. It might even teach some web developers that users actually prefer simple layouts without a lot of distractions.

One problem I do have with the reader feature is that it doesn't do enough to distinguish links from regular text. The content is displayed as black text on a white background with links colored a dark blue. There are no underlines and no mouseover feedback to tell you your mouse is over a link. Users who struggle with low contrast will find it especially difficult to identify links within the text.

Another problem I have is that it removes chrome within the article itself. Perhaps you've done something to highlight certain terms, or used color to visually represent some relationship in the textual content. All of that is stripped away, so reader-fied pages may actually lose some of their meaning. On the other hand, it's a web development best practice to avoid using color alone to represent such relationships in textual content, since vision-impaired users would not be able to use that information anyway.

Still, switching to reader mode requires an act by the end-user. Meaning if you don’t want to use it you don’t have to.

Apple has also created APIs that allow developers to write extensions for Safari. Perhaps an ad blocker and noscript (or equivalent extensions) will soon find a home on Safari.

iPhone 4

You're probably aware by now that Apple announced its new iPhone this week. Prices will be equivalent to the iPhone 3GS when it was released. The hardware is all new, including a special glass for the front and back that is scratch, fingerprint, and impact resistant. There are two cameras (front and rear facing), which means video chat or video phone calls. The case is smaller. There's a new processor. There's a second microphone for the purpose of noise cancellation. The camera will record HD video (720p), and you'll be able to edit video right on the iPhone. But the biggest feature is probably the new screen. It boasts 326dpi, and it's at around 300dpi that our eyes become unable to distinguish individual pixels. This means text will look smoother, photos will look crisper, and more information can be packed into a single screen. It was also noted that all existing iPhone apps, because they use Apple's APIs, will automatically be scaled up to work with the new, higher-resolution display.

But most of Steve Jobs' keynote address at WWDC was focused (as it should be) on developers. Lots of numbers about the kind of revenue generated by the App Store and the money developers make off it. The introduction of a new feature called iAds, which lets developers identify a space within their app where ads can be placed. Apple handles putting the actual ad into the application, and developers get a cut of the revenue. On the one hand, this is very cool for developers who want to offer free or trial apps without having to give their work away. On the other hand, it's troubling because Apple controls what percentage of the revenue the developer gets, and with no competition Apple can set any price or percentage it wants.

Which is my biggest problem with Apple: they are too controlling. AT&T does not have the best network and if I purchased an iPhone I would prefer to have it on another network. But that won’t happen with the iPhone (unless you jailbreak it, which has its own pros and cons).

I also wish there were a micro SD card slot on the iPhone. The $100 difference between the 16GB and 32GB models just doesn't make sense. If there were a micro SD card slot, I could buy the 16GB model, buy a 16GB micro SD card for $30, and put the $70 I saved towards something else. Beer, for example.

And, c’mon Apple, make the battery replaceable. It’s quite possible that in a year or two we’ll have better battery technology and I could swap out an old iPhone battery for a newer, longer-lasting one. That’s certainly something users of the new HTC EVO 4G are hoping for.

At least, it appears, tethering will come to the iPhone 4 (without the need to jailbreak it). But AT&T wants to charge you an extra $10 a month for this luxury. That's pretty lame, especially on top of the 2GB/month limit that's been imposed on all AT&T customers. If you have a cap in place and people pay when they go over it, why have a tethering fee? My guess is that AT&T's network is still too fragile and they're trying to dissuade average users from tethering to keep it as free from congestion as possible. Which brings me right back to the argument that the iPhone 4 should be allowed on other networks.

At the moment I don’t feel the new iPhone is worth the hassles and limitations that come with it. I really like the new screen and dual cameras and the HD video recording at a high bitrate and being able to edit and upload the video from the phone itself. All of that is very cool. And no other phone has that right now. But there will probably be a lot of them that do a year from now. Do I wait? Probably.