Old Ideas, New Technology

Charles Babbage is widely credited with coming up with the idea of programmable computers. His idea was sound, but the supporting technologies needed to make programmable computers a practical reality were not yet available. Now fast-forward to the mid-20th century. In the 1940s and early 1950s, the work of von Neumann, Eckert, Mauchly, and others resulted in programmable computers using vacuum tubes as the basic building block for making binary decisions (current flow allowed = 1, current flow blocked = 0). Vacuum tube computers were huge, expensive devices that filled rooms, or even floors, of a building. In addition, they generated massive amounts of heat.

In the 1950s, the transistor came into widespread use as the basic building block for binary switching. It was orders of magnitude smaller than the vacuum tubes it replaced and consumed far less power. As a result, computer designers could pack much more processing power into the space consumed by the old vacuum tube models. In addition, instead of filling up rooms, computers now shared rooms with other computers.

In the mid-1960s, integrated circuits began to appear in products and eventually led to the “computer on a chip” containing millions of transistors, again yielding orders-of-magnitude gains in both processing power per unit area and power consumption.

So do we really have new ideas? Or do advances in technology act as enablers for ideas that may have been floating around for centuries?

Think about this while you are sitting in front of your von Neumann/Eckert/Mauchly/Babbage machine.

Standards and the Real World

While attempting to get my site to pass W3C validation for compliance with the XHTML 1.0 Strict standard, I encountered a classic “theory versus practice” issue. The target attribute available in previous versions of the standard is no longer valid. This is a well-known issue, and a Google search will turn up many hits on the subject. Here’s one example that more or less explains the rationale for the target attribute’s demise.

So, what does one do in order to have this capability in XHTML 1.0 Strict? One writes a JavaScript routine to perform the function. Fine, you say. But do a Google search and you will find many different scripts with many different approaches to providing this capability. So much for a standard way of doing things.

In addition, what do the majority of the “experts” say about this? Mostly, they cop out and say that if you really must have this capability and you do not want to use JavaScript, you can always revert to the XHTML 1.0 Transitional standard, which still allows use of the target attribute.

In case you are wondering why I’m making such a big deal of this, I’ve been using the target attribute for years to control whether a new page loads on top of the current page or the browser opens a new window to display it. My convention has been that if the new page is in my domain (i.e., part of my site), I load it on top of the current page, and if the new page is from an external domain, I open it in a new window.
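To make that concrete, here is roughly what the convention looks like in markup (the URLs below are placeholders, not actual pages from my site):

```html
<!-- Internal link: loads on top of the current page (the default behavior) -->
<a href="/about.html">About this site</a>

<!-- External link: opens in a new window via the target attribute,
     which XHTML 1.0 Strict no longer allows -->
<a href="http://www.example.com/" target="_blank">An external site</a>
```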

This is especially annoying to me now that tabbed browsers (e.g., Firefox) are available, which allow one to take maximum advantage of the now “banned” target attribute.

So what did I do to solve my problem? I downloaded a JavaScript from the Internet that uses the rel attribute to direct where a page loads (e.g., set rel="external" instead of using target="_blank"). It works fine, but now I have to constantly hack the code generated by the WordPress editor, as it still uses the target attribute.
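I won’t reproduce the exact script I downloaded, but a minimal sketch of the general technique, assuming markup like the rel="external" anchor shown earlier, looks something like this: after the page loads, find every anchor marked rel="external" and set its target from script, so the attribute never appears in the markup and the page still validates.

```javascript
// Minimal sketch of the rel="external" technique (my assumption: the
// markup uses <a href="..." rel="external"> for off-site links).
function openExternalLinksInNewWindow() {
    var anchors = document.getElementsByTagName("a");
    for (var i = 0; i < anchors.length; i++) {
        var rel = anchors[i].getAttribute("rel");
        if (rel && rel.indexOf("external") !== -1) {
            // Setting target from script keeps it out of the markup,
            // so the page still validates as XHTML 1.0 Strict.
            anchors[i].target = "_blank";
        }
    }
}

window.onload = openExternalLinksInNewWindow;
```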

I guess the old phrase “standards are written to be broken,” which I first heard years ago, is still valid!

HarryB