An old trick has been rediscovered: anti-DNS pinning. The idea is to defeat the browser's DNS "pinning" so that an attacker-controlled hostname re-resolves to an internal IP address, letting the attacker's script talk to machines behind the firewall while staying inside the same-origin policy.
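The trick hinges on the attacker's nameserver answering differently over time. Here's a minimal sketch of that toggling logic; the addresses are hypothetical, and a real attack would serve these answers from an actual DNS server:

```python
# Sketch of the answer logic behind an anti-DNS-pinning nameserver,
# assuming the attacker controls resolution for www.example.com.
# Addresses are hypothetical.

ATTACKER_IP = "203.0.113.5"   # public server hosting the malicious page
INTERNAL_IP = "192.168.1.1"   # internal target behind the victim's firewall

def answer(query_count):
    """Return (ip, ttl) for the Nth lookup of the attacker's hostname."""
    if query_count == 0:
        # First resolution: serve the payload page. TTL 0 encourages the
        # browser to re-resolve as soon as its pin is broken.
        return (ATTACKER_IP, 0)
    # Later resolutions: the same hostname now points at the internal
    # target, so the page's scripts reach it under the same origin.
    return (INTERNAL_IP, 0)

if __name__ == "__main__":
    print(answer(0))  # first lookup: attacker's server
    print(answer(1))  # after re-resolution: internal target
```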
But now it’s been found that simply requesting a page on a closed port (www.example.com:81) will cause the browser to drop its pinned DNS entry and re-resolve the hostname, making this method of attack much easier and faster, to the point that some people have already built applications that will scan the victim’s entire internal network. From there the program might identify a specific server, either by requesting the unique default files that ship with a stock install of IIS or Apache, or by parsing the server’s name and version out of its 404 error page. It could then launch attacks against known vulnerabilities in that server or web application, leading to a compromise of the system. All of this is automated, running quietly in some hidden iframe while you continue to browse the internet.
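The fingerprinting step is straightforward to picture. A rough sketch, where the regex and the sample page are my own assumptions modeled on the signature lines Apache and IIS error pages commonly carry:

```python
import re

# Sketch of fingerprinting a server from its default error page.
# The signature pattern and sample body are assumptions, not taken
# from any particular scanner.

SIGNATURE = re.compile(r"(Apache|Microsoft-IIS)/([\d.]+)")

def fingerprint(body):
    """Return (server, version) if a known signature appears in the page."""
    m = SIGNATURE.search(body)
    return (m.group(1), m.group(2)) if m else None

# A typical Apache 404 footer line:
apache_404 = "<address>Apache/2.0.55 (Unix) Server at intranet Port 80</address>"
print(fingerprint(apache_404))  # ('Apache', '2.0.55')
```

With a match in hand, the scanner only needs a lookup table mapping server versions to known exploits to complete the chain the post describes.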
“2.0” sites are going to get screwed in this whole deal. They’ll either have to rely on word-of-mouth (“no, no, that site’s okay”) or on users with blind trust. Perhaps we’ll see people or groups start to maintain whitelists of sites that are okay to trust, maybe something like an RSS feed that’s updated hourly. But then you look at the history of spam blacklists and start to wonder how well that would really work out.
What will be really interesting is whether there’s a backlash. Users might start to realize just how much faster and more efficiently their browsers run, and how much better their experience of a site is, when all the extra scripty bits are disabled. Perhaps we’ll see a push away from crazy embedded elements and back to something simpler.