A Portable Web Server on a USB Stick

While doing some recent work on developing a WordPress theme for a friend, I found myself in need of a portable web server: something I could plug into a computer, start up, and use to continue my development work. Since this was for WordPress I needed PHP and a database. With no sense of whether or not this would even be possible, I charged in.

I began by collecting the components.

I chose PostgreSQL simply because it was a 50 MB download while MySQL was some 300 MB. Also, I'd never used PostgreSQL, so why not? Then I found out WordPress doesn't support PostgreSQL out of the box, but I quickly found PostgreSQL for WordPress.

I avoided installers and grabbed the appropriate ZIP file for each component so I could simply unzip it to my thumb drive. I unzipped each in turn into its own directory off the root of the thumb drive, giving me the following directories:

  • Apache24
  • pgsql
  • php
  • wordpress

Apache

First up, getting Apache up and running. This is pretty straightforward except that I would need to use relative paths instead of absolute paths if this was going to be a truly portable solution. I didn't find much online about what relative paths in httpd.conf are relative to, so trial and error it was. The answer: it depends. Some of the paths are relative to the directory where httpd.exe lives, while others are relative to whatever ServerRoot is set to in httpd.conf. ServerRoot itself is relative to httpd.exe's location and is typically set to one directory up from the Apache bin directory.

ServerRoot "../"

Next is setting DocumentRoot. I stayed with the default htdocs directory under ServerRoot; the DocumentRoot directive is relative to ServerRoot. However, the Directory directive is relative to httpd.exe. Don't forget this one.

DocumentRoot "htdocs"
<Directory "../htdocs">

And with those changes in place, Apache started right up!

PHP

PHP probably presented the biggest challenge, as I started to get buried under DLL dependencies because PHP's directory was not in the PATH environment variable. I could append that path on the fly using a batch script, but I wanted to avoid that if possible and, with some jumping through hoops, it is possible. First, here's what I added to the end of my httpd.conf to get PHP loaded.

LoadFile "../php/libpq.dll"
LoadModule php5_module "../php/php5apache2_4.dll"
<IfModule php5_module>
    PHPIniDir "../php"
    AddHandler php5-script .php
    DirectoryIndex index.php index.htm index.html
    Alias /wp "../wordpress"
    <Directory "../../wordpress">
        AllowOverride All
        Require all granted
    </Directory>
</IfModule>

Using Apache's LoadFile directive helps solve any missing DLL errors from PHP when starting up Apache. I've included the one DLL I got a missing DLL error for. Should you encounter others, make similar use of the LoadFile directive to fix things up. All the relative paths here are relative to DocumentRoot except for the Directory directive, as seen previously.

On the PHP side of things I made a copy of the supplied php.ini-development file and renamed it php.ini. I uncommented extension_dir and the two extensions for PostgreSQL.

extension_dir = "ext"
.
.
.
extension=php_pdo_pgsql.dll
extension=php_pgsql.dll

But then I ran into some problems. PHP wasn't finding those pgsql DLL files. I used Process Monitor to see exactly what paths were being looked at and found they were all relative to httpd.exe. I modified extension_dir to be relative to httpd.exe, but I was still receiving the same DLL errors. A second check of Process Monitor confirmed that not only was the right path being accessed, but the file was actually being opened. What gives?!

Out of frustration, I eventually created an “ext” directory in the same directory as httpd.exe and copied the DLLs over into that directory and, for reasons I’m not entirely sure of, Apache started up without a problem and PHP was working. If you have any idea why this is and what I can do to make it so I don’t have to create this ext directory, please let me know. For now all I care about is that it worked.

DLL Hell

The first time I booted up Apache with PHP on my portable server I got an error about a missing DLL, msvcr110.dll. This is part of the Visual C++ Redistributable for Visual Studio 2012, which can only be downloaded as an installer, not a ZIP. To get this missing DLL I installed the runtime and then copied the DLL from my C:\Windows\System32 directory over to my Apache24\bin directory.

That fixed the issue, eventually…

Because if you're mixing 32 and 64 bits, the location of this DLL after install might be different. If you're running the 64-bit version of Apache and PHP, then you need the 64-bit DLL which, on a 64-bit Windows system, is located in C:\Windows\System32. If you're running the 32-bit version of Apache and PHP on a 64-bit system, the 32-bit version of this DLL is in C:\Windows\SysWOW64. On a 32-bit system it's in C:\Windows\System32.

Good luck with that one.

PostgreSQL

I hadn't worked with PostgreSQL before and had no clue how to create a database, let alone start and stop the server. I'm still not sure I do, but here's what I did. First I needed to create a database. I opened up a command prompt, positioned myself inside the pgsql directory, and ran this command:

bin\initdb.exe --username <username> --pwprompt -A md5 -D data

This creates a data directory where all information and data about your database server will be stored. You could also think of it as initializing a database server instance. <username> will be the username of the superuser for the database. This can be anything you want; I used "postgres". During the creation process you will be prompted to provide a password for the superuser account. Note: if you do not provide a username, PostgreSQL will use the username of the Windows account you're logged in under.

When the initialization completes you’ll be told exactly how to start the server.

"bin\pg_ctl" -D "data" -l pgsql.log start

This will start PostgreSQL as a background process. To stop it, use the same command, but change “start” to “stop”. With the database up and running, create a user for WordPress. This can also be done from the command line as follows:

bin\createuser.exe -U <superuser> -W -P wordpress

This creates a user called “wordpress”. You will be prompted to enter the password for the account. Don’t forget it.

Next you need to create the database that will store the data for WordPress. Again, from the command line:

bin\createdb.exe -O wordpress -U <superuser> -W wordpress

This creates a database called “wordpress” and assigns the user “wordpress” as the database’s owner.

We’re nearly there. Just one more step to go.

WordPress

With Apache and PostgreSQL running (start Apache just by running httpd.exe from the Apache bin directory) it's time to get WordPress running. In the Apache configuration I gave above I aliased my WordPress directory to "/wp", so open up a browser and go to http://localhost/wp. If you haven't already installed PostgreSQL for WordPress you should see an error message that PHP is missing the MySQL extensions. No problem!

PostgreSQL for WordPress

You’ll find a readme.txt file inside the PG4WP ZIP file which will walk you through the process. Basically take the pg4wp directory that’s in the ZIP file and copy it into the wp-content directory of your WordPress install. Then make a copy of the db.php file located in the pg4wp directory and place it one level up, inside the wp-content folder. Your folder structure will look something like this:

  • wordpress
    • wp-admin
    • wp-content
      • pg4wp
      • plugins
      • themes
      • db.php
      • index.php
    • wp-includes

With that taken care of, go back to your web browser and go to http://localhost/wp. You'll now be warned about a missing wp-config.php file. You can try the web interface or just make a copy of the wp-config-sample.php file and name it wp-config.php. You'll then need to edit your newly created wp-config.php file to define the database name (wordpress), the database user (wordpress), and the password. Save the file and reload http://localhost/wp.
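
For reference, the relevant lines in wp-config.php end up looking something like this (the values match the database and user created earlier; use whatever password you chose):

define('DB_NAME', 'wordpress');          // the database created with createdb
define('DB_USER', 'wordpress');          // the user created with createuser
define('DB_PASSWORD', 'your-password-here');
define('DB_HOST', 'localhost');          // PostgreSQL is running locally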

If all goes well you'll finally see the WordPress install page. You will also (most likely) see a PHP warning at the top about pg_query having failed. Ignore it. Fill out the form and press that install button. With luck you'll get a success message. You'll also get yet more PHP warnings. Again, ignore them; it's okay. Log in to your new, minty-fresh WordPress site.

Starting and Stopping With Ease

At this point everything is up and running and everything is portable! All that’s really left is to find some way to easily start and stop the server with the click of a button. To do that I use a batch script.

@echo off
pgsql\bin\pg_ctl.exe start -D pgsql\data -l pgsql\pgsql.log
cd Apache24\bin
httpd.exe
cd ..\..
pgsql\bin\pg_ctl.exe stop -D pgsql\data

This will open a command prompt window and start PostgreSQL first, followed by Apache. The window will remain open while the server is running. When I'm finished, I bring the command window up and press CTRL-C to tell Apache to shut down. I'm then prompted whether or not I want to terminate the batch job. I say no, which lets the script continue and shut down the PostgreSQL database.

A couple of things to note. I found that I needed to be within the Apache24\bin directory before running httpd.exe to get around some missing DLL errors. I've also found that PostgreSQL sometimes shuts down for reasons I haven't figured out. When this happens I just stop and restart the server and everything is back up and running.

Finale

Congratulations, you have a portable web server. Go forth and develop!

Offset Column Chaos

I’m tired of confusing myself trying to write up a description of the layout. Here it is, make of it whatever you will.

Offset Column Chaos (Download)

The primary design idea was two columns, each with a maximum width, but with the background color of each column extending beyond the content to the edges of the viewport while the content, as a whole, remained centered in the viewport.

I took a few different approaches and eventually landed on using media queries to apply different approaches to the same problem depending on the state of the viewport (are we at max-width yet, or not?). It works. I perhaps made it more complicated than most people really need it to be, but you can pick out the bits you don't need to keep things simpler.
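
To give a rough idea of the effect, here's a simplified sketch. This is not the CSS from the download, and the widths and colors are made up; it's just one way to fake the general idea: the left column's color goes on the page background, a full-height strip supplies the right column's color, and a media query moves the strip's edge once the content hits its maximum width.

/* Left column color runs to the left edge of the viewport */
body { background-color: #e8e4da; }

/* Full-height strip supplying the right column color, running to the
   right edge; below the max width the column split sits at 60% */
#right-bg {
    position: fixed;
    top: 0;
    bottom: 0;
    right: 0;
    left: 60%;
    background-color: #cfd8dc;
    z-index: -1;  /* behind the actual content */
}

/* At or above the max width (960px here, split 576px/384px) the column
   boundary sits a fixed 96px right of the viewport's center line */
@media (min-width: 960px) {
    #right-bg {
        left: 50%;
        margin-left: 96px;  /* 576px - (960px / 2) */
    }
}

The actual content columns then sit in an ordinary centered, max-width wrapper on top of those backgrounds.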

The CSS is heavily commented with variables and formulas to help keep everything pixel-perfect. It’s a layout like this one that makes me wish CSS had some kind of built-in support for variables. It’d make creating these kinds of layouts a lot easier.

Work is In Progress

I was asked to create a WordPress theme for a friend’s web site. I’ve since created the layout and theme and intend to release a stripped-down version of this layout sometime soon. In preparation for this I dusted off the blog and even found a new template that is more to my liking.

I also started what was never intended to be, but has since become, an exhaustive write-up about what this new layout does and how it works. This has been a great exercise because it forced me to revisit certain design choices, and I've discovered new ways to approach certain issues with the mechanics of the layout. I've found new solutions to problems whose original fix I wasn't quite pleased with and also solved some minor bugs that I was initially willing to ignore.

I think I’m at the point where I started, which is cleaning up the CSS, adding comments, and writing about how it all works.

So I've got a new layout to give you soon. It's not terribly complex, just two columns, but how those two columns function is something I think will interest others besides just myself.

Amazon Silk

Today Amazon.com launched three new versions of their Kindle e-reader, including a color tablet called the Kindle Fire. In terms of hardware it's about what you'd expect from something launching a year later to compete with the Barnes & Noble NookColor: slightly better, but nothing that stands out as revolutionary.

But included with the Fire is a new web browser called Amazon Silk. And Amazon would like you to know this is a new and different browser from anything you’ve seen before.

So what's the big deal about Silk? All your web requests go through Amazon's servers, which handle retrieving all the content for you and optimize it wherever possible (e.g. a 3 MB JPEG that's sized to 300×200 pixels will be resized by Amazon before being sent to you as a 50 KB JPEG). Because Amazon's servers are sitting on some of the fattest pipes on the interwebs, they'll be able to pull down and deliver all this content to your Kindle significantly faster and more efficiently than if your browser were doing the work all by itself.

Sounds pretty nice, but I’m a bit of a pessimist.

I wonder if the image-resize example given in Amazon's promotional video might have unexpected consequences. For example, there may be web applications that purposely load a large image into your browser and allow you to zoom in and out and move around the image. Will Amazon's services understand that scenario and know not to shrink the image? I can think of a few examples that use image clipping and reveal the full image when hovering over the clipped area using CSS. Will Amazon know that the clipped area is not the only part of the image being displayed? Perhaps only known situations get optimized rather than Amazon using software to guess.

My biggest worry, however, is that all your web browsing now goes through a third party. If Amazon is making requests on your behalf it will need to present session cookies to the sites you're browsing. What happens when you need to log into a system over SSL? Does Silk make the HTTPS request through Amazon's servers? Does that mean all your passwords will be, at some point, on Amazon's servers? What happens if they're ever compromised? Does Amazon log your browsing history, and can they track it? What happens when I try to go to Barnes & Noble to buy something online through Amazon's servers? Some web sites prevent session hijacking by looking at things like your IP address. Amazon's servers, as they point out in the video, are all over the world. Will my IP address stay the same throughout a session or will it change as requests are routed through different Amazon servers? Some web applications might break because of that.

Amazon must be logging your browsing with Silk. Imagine a scenario where someone posts some illegal material through Amazon Silk. Authorities will track down the IP, which will lead them back to Amazon. Amazon must then have some mechanism to identify the user who posted the illegal material; otherwise Silk becomes a giant anonymous proxy machine.

I'm very wary of Amazon Silk. I do not think I would ever use it unless forced into a situation where no alternative was available. I don't want some third party sitting between me and the web sites I interact with, watching and recording everything I do.

Texture & Transparent Maths

This is the kind of post that would benefit greatly from the addition of screenshots, but I’m far too lazy so you’re going to have to put a lot of this into your head and create your own screenshots.

Now that that's out of the way, let's talk about a situation that came up over the weekend. I was looking at a particular layout I've developed and lamented that the solid color background felt a little too empty. What it needed was some kind of texture to make it more visually interesting, but not so much that it takes attention away from the actual content of the page. What immediately came to mind were the linoleum tiles of an old grocery store I went to years ago, which were solid-color tiles with little dots of black and white. I thought something like that might just pull off the trick of making things a little more visually interesting without taking focus away from the content of the page. So I needed to make some dots.


University Website

This XKCD comic hit a little close to home. I work in higher education and the problem of what to put on our web site has been debated since it was first launched back in the early 90s.

The problem boils down to the question "Who is your target audience?"

With a higher ed web site you’ve got several distinct target audiences; there is no singular target audience. You’ve got prospective students, current students, faculty, (administrative) staff, alumni, parents of students, media/press, and the rest which we lump into “visitors”.

Each of those groups has its own specific informational needs.

Prospective students want to know what they would be investing their time and money in when choosing which university to attend. A web site targeting them needs to convey what their experience with the institution will be like. This manifests on most higher ed web sites as things like a virtual tour and promotional materials about special events or accomplishments.

A current student doesn't need any of that; they're already on campus. They need more utilitarian things like course schedules, transportation information, faculty contact information, and available resources like libraries, computer labs, the book store, dining, etc.

Alumni want to know what’s happening on campus, specifically things that make the institution (and thus their degrees) more prestigious. The university, in turn, wants to campaign to alumni for donations to help further grow the institution.

Faculty and staff tend to have more utilitarian needs like current students. Forms, procedures, policies, etc. as well as training and various HR-related operations.

Parents want to know their children are in a safe and healthy environment and that they’re getting their money’s worth.

Media/press want experts they can go to for quotes when stories happen. In turn the university wants to publicize all the really exciting and prestigious events happening on campus so the public (and the alumni, and the prospective students, and the parents) know what a great place it is.

The rest that we lump into “visitors” are usually coming from off-campus to attend some event being hosted on campus. They’ll want directions and parking information as well as contact information for those hosting the event.

There are areas of overlap, but (as the XKCD comic points out) there’s a lot of separation of the needs of each population.

So what do we do?

The first problem is the implication that the homepage of a web site is the whole of the web site. That the one web page must cater to exactly what the individual needs.

This is just not practical.

So what we do is break information down into logical components and then find a way to organize those components together in a way that caters to a given audience. The way my institution handles this is by creating "landing pages" for each audience. Each landing page is a glorified list of links to those components of the web site that the given audience might be interested in. We try to group links together to help make navigating a page of links a little easier. We also integrate a list of "most popular" links (based on web and search logs, so this list can change from time to time) in a prominent place on the page.

The homepage becomes something of a sign at a crossroads. We'll put a few bits of news and campus events (those that would be of interest to a general audience) alongside some links to landing pages. The user looks at the links, selects which audience they are part of, and continues down their road.

The problem is not everyone realizes they should self-select and will instead take off into the woods, not following any road at all and either get lost or get lucky. This is why we tend to stick a search button on every page to act as something of a North Star for those who lose their way. But there are still those who refuse to look up at the stars or follow the road, find a comfortable spot, and start to scream.

Can we do better?

I've often thought about creating a web site interface along the lines of Twenty Questions. A sequence of simple questions, each with a YES and a NO button. Answer each question and eventually you'll get to the page you want. We remove everything that could possibly create confusion. No logos. No images. No text other than the question. Simple black background with white text and two buttons and that's it.

I think such an interface would be very successful at getting users to the desired information, but I also think it would create a backlash from users who perceive such a thing as being extremely condescending.

So can we do better?

Some might suggest a portal.

The "guest" portal, which everyone would see before they log in, would contain all the marketing material you might give to prospective students and visitors. Then users log in and the portal does the audience selection for them. Faculty get faculty-oriented content, alumni get alumni-oriented content, and so on. And with a portal you can target very specific audiences (all faculty members in the math department, all sophomores who are both in the SGA and taking Greek philosophy, etc.) without the user having to do anything. The server does all the heavy lifting.

Integrate the portal with admissions and student accounts. Allow prospective students of a particular major to communicate with current students of the same major to get their advice on the coursework. Allow alumni of a particular school or major to see what students of the same major are doing now. There’s an infinite number of possibilities, all of them positive.

So that’s it then, a portal.

Well.

Portals work if you have the time and manpower to manage them properly. You can be a little bit lazy with a static layout. It's the difference between owning a pet goldfish and owning a pet monkey. Yes, you're going to get far more out of your relationship with your pet monkey, but it's going to be a much bigger headache as well, requiring far more resources than a goldfish.

I’ve rambled way too long. I could write 50 pages on this. You’ll have to live with being cut off and not having everything answered.

Two points:

1.) University web sites may seem to lack the specific information you want right on the homepage, but that’s because there are a lot of different needs that have to be met in such a small space. Put a little effort into using the site and it WILL work for you.

2.) There is no absolute solution for distributing content among so many different audiences through a single web site. Figure out what you're willing to invest in a solution and start educating yourself on the options and their pros and cons. Then pick the solution that best works for your situation.

Americans with Disabilities Act (ADA) and the Web

The United States Department of Justice has announced that it plans to create rules that apply the ADA to the web.

I'd like to begin by pointing you specifically to the section titled "Barriers to Web Accessibility". It is a very good read with clear and specific examples of how web content can be inaccessible to users with disabilities.

I think this is a Very Good Thing. Not for any humanitarian reason, but for the very selfish reason that it will force developers to create better web sites. It will force developers to think “how will this affect users with disabilities” before they implement a web site design.

For example, there are quite a lot of stylesheets out there that make heavy use of !important rules. These rules override almost anything else that exists to style a given element, including the ordinary rules in user-defined stylesheets applied by users who have difficulty with low-contrast web pages (think gray text on a white background). !important rules are almost always the product of lazy developers who don't take the time to learn the cascading order of CSS and resort to !important when they can't figure out why their style won't apply like it should.
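
A contrived illustration of the problem (the selectors and colors here are mine, just for the example):

/* Author stylesheet: forces low-contrast gray text */
p { color: #999 !important; }

/* User stylesheet: this ordinary declaration loses to the author's
   !important rule, so the low contrast sticks */
p { color: #000; }

/* The user's only recourse is an !important rule of their own, which
   the cascade does let win over the author's */
p { color: #000 !important; }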

However, there are stickier areas that we'll all have to deal with. For example, the use of CAPTCHAs: those little scrambled words you have to type into a box before you can submit a form. CAPTCHAs typically rely on images, which are inaccessible to blind users. reCAPTCHA employs an alternative, audio-based CAPTCHA alongside its image CAPTCHA for such users. I'm a big fan of reCAPTCHA and suggest it to all web developers.

Another problem will be video content. Blind and deaf users won't be able to access the full content of a video; however, providing video captions or (the more correct approach) a transcript of the video will solve the issue. It's not a technological hurdle, just a tedious one. This web site specifically talks about YouTube and captioning as one way to solve this problem.

Mouse-driven events are yet one more problem area we'll need to deal with. I myself make heavy use of drop-down menus with the CSS :hover pseudo-class. However, try tabbing through a web page yourself and you'll see those drop-down menus never trigger. My approach to this issue has always been that the top-level items (those accessible to users who can only tab through the page) should link to pages from which the items in the drop-down are also accessible. The drop-down provides a shortcut, but you are not limiting disabled users' access to information.
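
Here's a bare-bones sketch of the kind of menu I mean (the markup and selectors are illustrative, not my actual navigation):

/* Sub-menus are hidden until their parent list item is hovered */
#nav li ul { display: none; }
#nav li:hover ul { display: block; }

/* Tabbing onto the top-level link never fires :hover, so a keyboard
   user never sees the sub-menu; that's why the top-level link itself
   has to lead somewhere useful */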

There are other areas to cover, but I'm not here to cover them all. In fact, I'm going to assume new areas will be created as technology progresses. The trick is to develop the mindset of constantly asking yourself, as you develop a web site or some web-based resource, "is this accessible?" If the answer is ever "no", you need to find a way to make that answer "yes". And, most importantly, follow through to make it a "yes" with vigor rather than apathy, as I tend to believe developer apathy is the cause of the majority of inaccessible web sites out there right now.

Obfuscated Javascript Spam

Recently I’ve been receiving phishing-spam in the form of official-looking Amazon.com invoices. Curiosity got the better of me and I clicked on the phishing link. The page that came up was blank. A quick source view revealed a bunch of obfuscated javascript.

I wanted to see how it worked.

Here is a sample line of the code:

mGdujq['euvLaulm'[VvIf](/[muzLc]/g, EWgUi)] \
(ltY(mGdujq[['uhnKehsKcKaKpleo'[VvIf](/[oKhlE]/g, EWgUi)]](IuO)));

Now what’s going on here?

Well, plain as day in the source I see a couple very important lines that will help decode this. The lines are:

var EWgUi = '';
var mGdujq = this;
var VvIf = '' + 'replace';

Armed with this information, the line decodes easily before our eyes to:

this['euvLaulm'['replace'](/[muzLc]/g, '')] \
(ltY(this[['uhnKehsKcKaKpleo'['replace'](/[oKhlE]/g, '')]](IuO)));

What's left to decode is how the regular expressions are used. For example, let's look at this piece of code:

'euvLaulm'['replace'](/[muzLc]/g, '')

'euvLaulm' is just a regular old string. You can call the string's replace function in many different ways, such as:

var str = 'euvLaulm'; str.replace();
'euvLaulm'.replace();
'euvLaulm'['replace']();

The regex /[muzLc]/g simply matches any character listed within the square brackets. The full line of code calls for every match to be replaced with '' (an empty string), or in other words, for those characters to be deleted from the string.

euvLaulm → eval

The result is the string ‘eval’.

So the fully interpreted line of javascript reads as follows:

this['eval'](ltY(this[['unescape']](IuO)));

Or in code more readable to my own eyes:

this.eval( ltY( this.unescape( IuO )));

Strewn throughout the javascript are lots of variable declarations that create strings of seemingly random letters and numbers. On closer inspection you might notice there's a pattern to the strings: they consist of alternating hex and non-hex characters. (A hex character is 0-9 or a-f.)

Near the end of the code all these strings are concatenated, and a series of replace operations is performed to replace all the non-hex characters with '%<hex character>'. The result is a string of URL escape sequences (a percent symbol followed by 2 hex characters). This string is stored in the variable IuO.

The URL escaped data is then unescaped to create an array of bytes (aka, a string, except the bytes aren’t all printable characters, so I can’t call it a string). This data is passed to the ltY function which performs a ( <byte> XOR 13 ) operation on each byte of data. The result is a string of HTML that creates a hidden iframe to some porn referral page and a META refresh that redirects the user to a male supplements web site after 4 seconds.
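
Put together, the decoding chain looks something like this. This is my own reconstruction for illustration; the function, variable names, and character mapping are mine, not the spammer's, whose script hard-codes its own letters and replace calls.

// Illustrative reconstruction of the decoding steps described above.
function decodePayload(chunks, map) {
    // 1. Concatenate the scattered string fragments.
    var mixed = chunks.join('');
    // 2. Replace each non-hex letter with '%' plus a hex digit so that,
    //    together with the hex character that follows it, it forms a URL
    //    escape sequence like '%31'.
    for (var letter in map) {
        mixed = mixed.replace(new RegExp(letter, 'g'), '%' + map[letter]);
    }
    // 3. Unescape the '%xx' sequences into raw bytes.
    var bytes = unescape(mixed);
    // 4. XOR each byte with 13 to recover the hidden HTML payload.
    var html = '';
    for (var i = 0; i < bytes.length; i++) {
        html += String.fromCharCode(bytes.charCodeAt(i) ^ 13);
    }
    return html;
}

// A tiny worked example:
// decodePayload(['z1K4', 'L3'], { z: '3', K: '6', L: '3' }) returns '<i>'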

That was fun. A little sleuthing and puzzle solving. But what is there to take away from all this?

First was learning new ways to use and abuse javascript syntax, such as calling a string literal's methods with bracket notation: 'string'['function'].

Also curious was the large amount of superfluous statements in the code. Variables would be created without initialization. Then they'd be initialized to an empty string. Then they'd be set to their real value. Three statements to perform an operation that could be done in one. I imagine this, along with the random upper- and lowercase letters used both as variable names and as data, makes the code more difficult to parse by hand (or by eye). A few minutes and a bit of perseverance will overcome that, but it shows these types of scams are designed with the user who performs only a cursory inspection of the underlying code in mind.

The multiple decoding steps needed to arrive at the final "attack" HTML indicate to me that the code is designed to circumvent string-comparison spam filters. That there's more than one decoding step, and that there's a bunch of extra, useless javascript (if/else blocks with a single assignment to an unused variable), makes me wonder if this code was also created to circumvent spam filters that are javascript-aware. It's working so far: my institution's normally solid spam filtering software has let this one slip by twice in the last week.

And it’s nice to see what kinds of tricks spammers have up their sleeves.

SharePoint Designer

I recently installed Office 2010 and, along with it, SharePoint Designer 2010.

SharePoint Designer was a child of Microsoft's WYSIWYG HTML editor FrontPage. Many people cut their teeth in HTML with FrontPage and were promptly told (rightly so) to ditch it for something better. But SharePoint Designer 2007, which is free for any Windows user to download, might actually not completely suck! What a bargain, then: a decent WYSIWYG HTML editor, free for anyone running Windows.

But SharePoint Designer was not the only child of FrontPage to come out of Redmond. There is another WYSIWYG HTML editor called Expression Web. However one must purchase Expression Web; it is not free. I wonder why. SharePoint Designer 2010 answers this question.

SharePoint is a product from Microsoft that tries to solve a lot of business problems. It is perhaps best to think of it as a business intranet on a single server. It handles collaboration, web publishing, portals, wikis, blogs, etc. It's not a product, it's a platform. And SharePoint Designer is intended to be used to develop content on SharePoint servers. But SharePoint Designer 2007 lets you create and edit standalone web pages. In essence you could replace FrontPage with SharePoint Designer 2007. And don't forget that it's free! So that's what a lot of people did.

Enter SharePoint Designer 2010, which brings with it a very large, very problematic restriction. It only lets you develop content for SharePoint servers. No longer can you manage just any old HTML content; if it's not on a SharePoint server, you can't touch it with SharePoint Designer 2010.

So all those folks who have looked to SharePoint Designer as their FrontPage replacement are in for a rude awakening.

What’s the Microsoft solution? Expression Web 2010, on sale now at the cut-rate price of US$149.00.

So what free alternatives are available? Well, SharePoint Designer 2007 is still available for download; maybe stick with that for now. Or you could experiment with Aptana or KompoZer. Or stick to a plain text editor (my preferred choice).

But this post isn't about evaluating alternative WYSIWYG HTML editors. This post is a simple warning to those of you who thought you had found your FrontPage replacement in SharePoint Designer. You didn't.

The Elephant in HTML5’s Room

The new HTML5 video element will allow content developers to embed video directly into their web page without having to utilize third-party plug-ins such as Adobe’s Flash. This is a good thing.

Except there’s one, big problem. The HTML5 spec does not specify what codecs need to be supported by compliant browsers. It is left up to the individual browsers what codecs they do and do not support.

Perhaps the most popular video codec on the street is H.264. H.264 is a great codec that balances the need for small file size with good picture quality. It's no wonder it's used in newer HD technologies such as Blu-ray. It's also used in almost all new HD video cameras for both personal and professional use, and it is the growing codec of choice for distributing video online.

There’s just one problem: MPEG-LA.

MPEG-LA is a firm that manages patent pools. Patent pools are exactly what they sound like: a company or group of companies takes their patents and puts them together into a pool. The patents in a given pool usually have something to do with a specific product, in this case the H.264 video codec. A company can then license all the patents in the pool as a package with respect to the related technology. MPEG-LA is who you have to pay if you want to use the H.264 video codec. Any DVD or Blu-ray player you might have, as well as any video camera or other device that supports H.264, almost certainly includes a license from MPEG-LA for the use of H.264 on that device.

And right now the MPEG-LA licensing allows free use of H.264 for the purpose of streaming video over the internet so long as you don’t charge any money for it. This is why YouTube has so far received a pass on paying hefty MPEG-LA licenses. Companies like Netflix, however, do have to pay a license as Netflix users are paying for the content.

That could change. The current license ends December 31, 2010. Come January 1, 2011, MPEG-LA could require companies like YouTube or even personal web sites that stream H.264 encoded content to pay a license fee. And for how much? Who knows? That’s entirely at the discretion of MPEG-LA. MPEG-LA could allow their free license to continue the way it has, allow H.264 to become even more popular with the web, and then in 2 or 3 years start asking for hefty license fees. At that point many companies will be so reliant on the codec they’ll have no choice but to pay.

Some people don't like this arrangement. Some people have set out to develop a new codec that delivers a quality-to-file-size ratio comparable to H.264, but is not encumbered with patents. Enter Theora.

From the same people that brought you Ogg Vorbis comes Theora. Theora is based upon the VP3 codec developed by On2 Technologies. On2 released VP3 into the public domain in the early 2000s, and Theora improved upon VP3 to become what it is today. Theora's development was done with an eye towards steering clear of existing patents. The result is a video codec that is unencumbered by patents and free for anyone to use.

Sounds great! Why doesn’t everyone use Theora instead of H.264?

There are a few reasons.

The biggest issue is the lack of an actual body or company that's willing to indemnify users of Theora from patent lawsuits. This, in turn, has led to the spread of considerable FUD about Theora, with hints of some major company just over the horizon lying in wait, ready to strike the first time a company with a lot of money tries to use the codec in one of its devices. Recently Steve Jobs, the man at the head of Apple, made statements to that effect. The problem is these threats have been looming for years and yet nobody has ever come forward to challenge Theora. Partly because "they" might be waiting for a big company with lots of money to start using Theora so they can file a lawsuit and win lots of money. But I believe they simply don't want to risk losing such a challenge in a court of law and thereby proving Theora is free and unencumbered. To do so would vindicate Theora and make it an attractive option for those who don't want to pay MPEG-LA license fees.

Another reason is that some major companies, including Apple, are part of the MPEG-LA patent pool. They stand to make money by popularizing H.264. To promote a codec they don’t have a hand in would be counterproductive to their revenue stream.

A third reason is the argument that Theora's video quality isn't as good as H.264's. Early Theora 1.0 encoders did not produce great results, but Theora 1.1 has proven to be nearly as good as, if not equal to, H.264, especially at low bitrates. A subset of this argument is the lack of hardware decoders for Theora. This is a more relevant argument. H.264 decoder chips are available and can be found in a wide variety of products including Apple's iTouch, iPhone and iPad, as well as virtually all video recorders produced in the last 5 years.

Now back to HTML5 and its video element.

The people behind open source browsers like Firefox and KHTML don’t have the resources to provide an H.264 license for all their users. Thus these browsers are unable to support that video codec natively. On the other side you have companies like Microsoft and Apple who refuse to include native support for Theora because they worry about patent liabilities in the codec or stand to make money by pushing the use of H.264.

The result is that there is no single video codec that will be universally supported when HTML5 is finalized. This means developers who wish to make use of the HTML5 video element will either need to keep two copies of each video (one in H.264 and the other in Theora) or ask the end user to install some third-party plugin.
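
In practice the dual-codec markup looks something like this (the file names and the Flash fallback player are placeholders, not a recommendation of any particular player):

<video controls width="640" height="360">
    <!-- H.264 for Safari and Apple's mobile devices -->
    <source src="movie.mp4" type="video/mp4">
    <!-- Theora for Firefox, Chrome, and Opera -->
    <source src="movie.ogv" type="video/ogg">
    <!-- Flash fallback for browsers with no HTML5 video support -->
    <object type="application/x-shockwave-flash" data="player.swf"
            width="640" height="360">
        <param name="movie" value="player.swf">
        <param name="flashvars" value="file=movie.mp4">
    </object>
</video>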

I’m willing to guess that the vast majority of developers will simply continue to do what they already do, which is to use Flash to play videos within their web site.

But wait! Apple has declared itself to be Flash-free. Apple’s iPad, iPhone and iTouch will not support Flash in any form. So developers who wish to display video content on those devices will need to use the HTML5 video element.

So what are developers to do?

Sadly, I believe the majority will do a bit of client detection and deliver an H.264 stream to Apple products, while the rest get a Flash movie that plays back the same H.264 stream. Flash and H.264 both win. Theora and open source lose.

Or maybe not.

Google bought On2 Technologies last year and plans to open-source On2’s VP8 codec. This is the same family of codecs that VP3, the basis for Theora, comes from. But VP8 is five generations removed from the aging Theora (meaning it should be better). The result could be a new codec, backed by a huge company (Google), and used on the largest video streaming web site (YouTube, which is owned by Google). We could see over the next couple years a huge shift towards VP8, leaving Theora and H.264 behind.

But I predict we’ll see a blend of all codecs. Theora will remain a staple of open source browsers like Firefox. Apple and Microsoft will keep pushing H.264 and Google will try to push VP8 through YouTube. But Google will be forced to retain H.264 and Flash video streams to support Apple products and legacy devices that don’t have the power to decode HD video streams.

I'd like to see VP8 put into the public domain. I'd like to then see Xiph take VP8 and create Theora 2.0. From there, Firefox, WebKit, and Opera start supporting Theora 2, and YouTube starts moving towards just VP8/Theora 2 video streams, which will then be supported by Flash. Apple is left with the choice of either building in support for these codecs or creating its own video sharing site that supports only H.264, and that native YouTube applet on your iPhone goes away.

Isn’t it nice to want things?