Software & IT

How the Telecommunications Act of 1996 Brought Us Donald Trump

Over two years ago I started writing a treatise on the effects of the Internet on music, attempting to debunk the popular theory held among technologists like myself that the Internet is an inherently democratizing force that empowers individuals. I collected charts and graphs, pulled industry data, and created a very compelling storyline.

I even began to compile a global database and build a visualization tool to present all the hundreds (thousands) of “independent” record labels that are actually branches of Sony, Warner, or Universal, tying together the hierarchical structures employed to disguise the fact that your favorite cuddly “indie” band is actually just a product squeezed out of a tube, or a guy who “sold out to buy in.”

Unfortunately, the deeper down the rabbit hole I fell, the more depressed I became about the situation. I actually became so depressed about it that I abandoned the article, picked it back up a year later, got even more angry and depressed, and put it down again. My research not only caused me to despair and not finish my article, but also made me almost give up on both technology and music as well. It was that bad.

Maybe the truest statement ever made is: “The truth will set you free, but first it will piss you off.”

I’ve tried to get it finished several times, but at this point it’s become the Third Rail of Depression and Futility and I can’t bear to look at it. Also, the data collected in 2014 are starting to age, and I have absolutely no desire to re-mine the data again.

This article, therefore, will perforce need to be written at a sprint, because the longer I steep in this reality, the more I despair. I will attempt to annotate as I can, but I’m not here to prove anything, merely to lay out a line of reasoning that others can follow and bolster (or tear down) with facts, and instead of drilling into source data, spreadsheets, and databases, I’ll probably just quote Wikipedia and leave enough links at the bottom that you know I’m not spinning this yarn out of whole cloth.

Instead of laying out irrefutable proofs, I intend merely to connect some dots with a reasonably thick line. Sorry about that, but I got a life too.

In 1990 I graduated from Texas A&M University, aptly described by one of my dearest friends as “a hotbed of conservatism,” from the Lowry Mays School of Business. After graduating, I attended the Red McCombs School of Business at the University of Texas at Austin where I received an MBA in Information Systems Management. As it turns out, those two individuals – Lowry Mays and Red McCombs – will become instrumental in this story.

I was a big fan of Milton Friedman (still am, generally speaking) and a True Believer in capitalism as the engine of enriching the masses (this view has since become more nuanced). I called myself a small-L libertarian, wanting little to do with the quacky Lyndon LaRouche party, but finding value in small-L libertarian principles: fiscally limited government, small defensive military, an absolute defense of civil liberty, generally open borders and free trade, and the use of progressive taxation (Friedman’s “negative income tax”) to provide a minimum guaranteed income instead of highly invasive and inefficient government services like public housing and food stamps.

I bring all of this up to share that I am not by birth a radical anti-capitalist, but someone who came into this situation with views far different from the ones reality has since beaten me into accepting. I am still no radical anti-capitalist, but my eyes are a lot more open than they once were.

First I must dispel a notion:

“The Internet is inherently decentralizing and democratizing.”

As one of the very earliest Internet developers (I started building complex database back-ended web sites in 1995) and a small-L libertarian, I gulped this Kool-aid the moment it was offered. It fit exactly into my ethos. As a technologist, I could see that the Internet had the potential to decentralize everything. And as a musician, I was sure that a New Era of Music was upon us, one where a musician could simply create and distribute music directly to fans without middlemen, and that this empowerment would destabilize and destroy the record label business which had done such terrible disservices to the artists it supposedly represented.

Without getting into the charts, data, spreadsheets, and suicidal ideations from my abandoned article, let me just cut to the chase: this view was balderdash, at least over a two-plus-decade timeframe. In a nutshell, the top 1% of artists now take in 77% of all music revenue, the top 3 labels control roughly 90% of the music you hear, access to the key new-music discovery media (radio, satellite, and curated playlists) has narrowed tremendously, and more money than ever before is required to build and maintain an audience. If you doubt all this, write your own article, because I couldn’t finish mine without wanting to drink Drano.

In 1994 nobody as far as I knew was using the term “disruption,” but what I have learned from my work in tech, and from my research, is that disruption is really mostly analogous to a game of 52-pickup: a change comes along that suddenly seems to throw all the playing cards in the air. For a brief moment in time, the former holder of all the cards is destabilized, and everyone in the room has a narrow window in which to grab a few cards while they’re still in the air. Some individuals get lucky, and grab enough cards to empower themselves before the cards are grabbed back up by their original holders. These lucky individuals can then serve as useful anecdotes to the world about how the “disruption” has “empowered people.” Everyone knows the story of the musician that got famous by building a fanbase on MySpace and never signed to a record label. Sadly, far too many of us thought that was a trend not an anecdote.

Even sadder, at least some of these anecdotes were simply propaganda stunts by massive Internet businesses to market their brand. For example a particular Internet giant tried for years to spin the story of how their social network was helping to make a particular “artist” quite famous (3+M social followers!), when really all they did was push this person onto everyone’s social feed and hope for the best while nothing actually happened in that artist’s musical career. Heck, the “artist” never even completed a full album of music. Still, millions of people saw the story and got hope, including me.

Silly, silly me.

I’m not here to bash disruption per se. My view is that disruption, like the technology that enables it, is value-neutral. The point is that it is not inherently value-positive, as so many of us technologists cling to as if it were religion.

So the Internet did not tear down the evil record labels as predicted. Nor did it destroy radio as predicted: in fact, the overall market for commercial radio increased 13% in the decade 1998-2008. The power of radio has surely diminished somewhat since 1994, but the current and continuing influence of radio on music is almost impossible to overstate. At least as of 2014 (when I collected data for my aborted article), terrestrial radio was still the #1 source of new music discovery.

Let me repeat that: 20+ years after the invention of the modern Internet, FM radio is still the #1 way that people discover music. And the #2 way people discover music is through word of mouth — in other words, from a friend who probably discovered the music where? On radio.

So I will say it again: it is almost impossible to overestimate the influence of terrestrial radio on new music discovery.

Radio is not dead. Not even close.

When researching my previous article, this was where the rabbit hole opened up and swallowed me whole. Because something else had changed in the intervening years since Marc Andreessen brought us the Mosaic browser – in fact, this giant change was largely because of that browser.

That change was the Telecommunications Act of 1996.

Before we fall down that rabbit hole together, I want to sidestep for a moment, because something else happened in 1996: my rock band from college got its first single played on a top radio station in a top-ten market by one of the nation’s best-known DJs, a man we all call Redbeard. How this happened wasn’t luck or payola: we made a demo, took a CD to the radio station, met Redbeard (cool guy), he listened to the song, liked it, and agreed to give it some spins.

That song didn’t make us famous. But Redbeard and his peers at the competing station broke a lot of influential North Texas bands during this period of the 1990s: the Nixons, the Toadies, Tripping Daisy, Edie Brickell and New Bohemians, and many others got their start on the radio exactly this way – by taking a demo to a local station and getting spins – in drive-time rotation in a Top 10 music market. That’s a Big Fucking Deal. It’s also a feat that is practically impossible for an unsigned local artist to pull off today, for reasons we shall soon learn.

Spins on radio mean fans at shows. More fans at shows mean more requests for the song on radio. More requests mean more spins, more spins mean more fans — then better shows at better venues, and so on. Eventually the effect spills over onto other stations and into neighboring markets. It’s an easy-to-understand, meritocratic, “bottom-up” virtuous cycle that Made Music Great from 1970-1996, the Golden Age of FM Radio for anyone old enough to remember it. It’s a process that played out for years on hundreds of stations across the country, surfacing local talent and exposing it to the wider world.

This Golden Age was made possible only because of the existence of largely independent local radio stations employing independent DJs to curate a playlist for the local demographic and to develop the local market. This was the formula that had served music listeners since the advent of radio, and that, for all intents and purposes, ended after 1996 and has yet to be replaced with anything.

Particularly on the Internet, where “local” goes to die.

As a young technologist in 1996, I was on the front line of Internet hype. A few years prior, the “Internet” was still something for nerds wearing propeller hats like me. Suddenly, the light bulb came on, seemingly for everyone at once. ISPs like UUNET, previously thought to serve a tiny potential market of hackers, started doubling in valuation every few months as people realized “everyone’s going to want to get online!” Startups like Amazon and eBay and Google – and Pets.com – achieved insane levels of hype. The Internet, it was thought, would eat everything. And it probably will, eventually.

It became relatively obvious early on that the Internet, along with other disruptive technologies of the time like cellular and cable, would radically change telecommunications. In the wake of this sudden disruption, the Telecommunications Act of 1996 was passed. It was the most important piece of legislation affecting telecommunications of all kinds since the Communications Act of 1934, which created the FCC.

The purpose of the 1996 Act was stated as:

to provide for a pro-competitive, de-regulatory national policy framework designed to accelerate rapidly private sector deployment of advanced information technologies and services to all Americans by opening all telecommunications markets to competition

which, in 1996, sounded like a good idea to a young, small-L libertarian. And in fact I’d guess that there are many aspects of the Act that have been good for the country and its inhabitants. Probably. Maybe.

The Act did many things, mostly oriented around de-regulating the boring communications infrastructure maintained at the time by AT&T and the Baby Bells to try to re-orient it around serving the needs of the coming Internet market. That was the part of the bill that everyone was discussing and arguing about at the time. However, unbeknownst to most, it also significantly de-regulated radio, television, and print media for the first time since 1934.

Under the 1934 Communications Act, radio was held to be a (at least quasi) public good – like clean air, street lighting, or libraries. 1934 was a time when most of the print news in the country was controlled by a small number of political machines, like that of William Randolph Hearst, which used sensationalism and “yellow journalism” to promote political agendas. The goal of the 1934 legislation was to prevent such monopolization from taking hold in the nascent radio market, with its limited space on the dial for competition. The mechanism was straightforward: limit the number of stations any entity could hold in any one market, and the overall number of stations any entity could hold nationwide. The Act also limited the ability of media companies to “forward integrate” into radio – to prevent a company like RCA from controlling the distribution channels for its competitors’ music, and to prevent a company like Hearst’s from commandeering the airwaves for political purposes. (Hearst, originally a Roosevelt supporter, would turn strongly against FDR after the passage of this bill.)

In other words, radio was kept decentralized with the goal of maximizing local and independent access.

It is important to understand that at the time, most people had access to only a small number of radio stations. The great 20th-century urban migration was not complete, and radio was nascent and capital-intensive. It is for this reason that the FCC was created: to ensure that the fledgling technology was deployed in a way that prevented monopolization.

It is from this philosophy – radio as a public good – that later notions like the “Fairness Doctrine” and “payola” sprang. In the 1940s, the FCC held that radio programming must present opposing views on controversial material instead of only presenting one side. This was the so-called “Fairness Doctrine.” Likewise, record labels were kept from buying access to stations (and crowding out competitors) by rigorous enforcement of the laws against payola – laws still (theoretically) enforced today. Under payola rules, a radio station can play a song in exchange for money, but must disclose the spin as “sponsored content” and cannot count it toward the song’s ratings.

The Fairness Doctrine was abolished by the FCC in 1987, but the anti-monopoly rules stood until the passage of the Telecommunications Act of 1996, which relaxed or removed the restrictions on the number of stations any one entity could own (both in one market and overall), and relaxed or removed the restrictions against forward integration, allowing media creators to own networks, and vice versa.

The effect was that the 1996 Act, which was supposed

to provide for a pro-competitive, de-regulatory national policy framework designed to accelerate rapidly private sector deployment of advanced information technologies and services to all Americans by opening all telecommunications markets to competition

actually did no such thing at all, at least not in radio and media. The centralization in media has been dramatic: in 1983, 50 companies controlled 90% of US media – that number is now 5 (Comcast, Walt Disney, News Corp, Time Warner, and National Amusements) with almost all of the consolidation occurring since the passage of the 1996 Act. In 1995, companies were forbidden to hold more than 40 radio stations, total – by 2003, only eight years later, one company owned over 1200 stations, including having outright monopolies in many markets where they own and program every station on the dial. Where once FM radio was a unique place to discover new, unusual, and local music, today 80% of playlists match.

And – even though Payola is purportedly still a crime, these media empires enter into profit sharing agreements with the major record labels (who control 90%+ of the music you will ever hear). Folks, this is Payola writ large. Good luck if you’re actually indie.

The degree of consolidation has been breathtaking. I think this infographic sums it up best.

That the Telecommunications Act of 1996 increased efficiency in radio is undeniable. It is estimated that as many as 10,000 people have lost their jobs in radio broadcasting since the Act was passed – even as the total number of stations broadcasting in the USA has increased. This has happened because there is often absolutely no local “station” at your “local” station, but just a transmitter, broadcasting a program that sounds completely local, but which is programmed by Who Knows from an office in Who Knows Where. This has allowed for massive efficiencies of scale: it now takes approximately 0.5 people to run a “local” radio station.

As a small-L libertarian, the notion of efficiency has a certain nice ring to it, until you ask, “what, really, is the product, and how have we saved and benefitted as a society?” The answer to that question is complex, but in a nutshell, the product is simply advertising: advertising for products, sure, but also advertising for big-time-record-label music through these profit-sharing agreements, and – thanks to the elimination of the Fairness Doctrine – advertising for a specific political point of view. The notion of radio as a public good is long gone.

This was basically where my previous attempt to write a similar article about the music business broke down. Because once you, the small-L libertarian technologist-musician, understand that:

  1. Even though the Internet supposedly “changed everything” you still need good old radio to build local markets but
  2. You’re basically cut out of local radio altogether
  3. By the major record labels we said we “disrupted”

you want to just give up and make that nice big Drano cocktail. It’s hopeless.

I remember some of the discussion when the Telecommunications Act of 1996 was passed. As a technologist caught up in the pre-bubble phase of Major Internet Hype, it was “clear” that the future of radio was dead. “Soon” we would be streaming an infinite number of stations digitally. The use of AM and FM waves would “soon” be a dinosaur. With digital, an infinite number of “channels” would be possible, so the idea that radio was a limited “public good” in need of strong mandated decentralization seemed instantly obsolete. So the idea that we should deregulate radio and allow it to blend with media companies had a certain air of logic to it, since radio itself was going away to be replaced by the Internet. And after all, “less regulation equals more competition,” right?

This all might, in fact, happen one day. But satellite radio and streaming have not displaced terrestrial AM and FM radio. Most people still listen to radio in their car – and 25 years in, the Internet still really hasn’t penetrated the vehicle far enough to eliminate broadcast radio. It is displacing it, somewhat – but the number of radio stations in the USA has not shrunk since the 1996 Act was passed – it’s grown. And, since the Act allows conglomerates to own and operate virtually without limit, they’re chewing up space on your satellite dial, too – and have excellent control over the Spotify playlist you’re probably listening to.

And, I’ll add, in much of what we call “flyover country,” far removed from urban culture, AM radio is still the only thing you can reliably find on the dial due to its superior reach.

In the early 1970s, Lowry Mays and Red McCombs formed Clear Channel Communications when they began to acquire failing radio stations and return them to profitability, typically by switching them to less operationally expensive formats like religious programming or talk radio. By the mid-1990s Clear Channel owned 40 radio stations and over a dozen TV stations.

Conversion to religious and talk-only formats was not profitable because these were products with greater demand – it was profitable because they were products with lower cost. As researcher Jackson R. Witherill writes:

Jeffrey Berry and Sarah Sobieraj of Tufts University interviewed a number of radio executives in 2011 and they found common ground on the sentiment that “the surge in talk radio programming was supply driven, not demand driven” (Berry and Sobieraj 2011). This means that as individual stations within national corporations became unprofitable, switching to talk radio programming was an attempt to stay in business through producing inexpensive and nationally broadcast programs.

The rise in the number of talk radio stations has meant that syndicated programs, which have become increasingly common, have gained a higher level of exposure through the creation of more stations airing the same material in new locations. This increased exposure results in higher ratings for the show.

(emphasis mine)

In short, the conversion of radio to talk formats was less about listeners demanding that material and more about the low cost of production and the economies of scale in delivery. In other words, they’re saying what we musicians have known for a long time: generally speaking, people like whatever they’re fed by the radio. Therefore the curator – or program director, or talk show host – has a powerful shaping role.

Enter the Telecommunications Act of 1996, which removed the restrictions on station ownership, and suddenly Clear Channel’s acquisition-plus-talk-format-conversion strategy could be executed at scale. Along with other hungry conglomerates, Clear Channel started gobbling up independent radio stations en masse. In three years the company had grown 10X – to over 400 stations. In five more years, Clear Channel would triple its radio reach again, growing to over 1200 stations – as well as 41 television stations and over 750,000 outdoor advertising displays. Clear Channel is now known as iHeartMedia, which is still the nation’s largest holder of radio stations and, through its subsidiary Premiere Networks, is the largest producer of syndicated talk radio.

Suddenly, giant radio conglomerates like Clear Channel / iHeartMedia were able to push syndicated talk radio formats completely across the country, coast to coast. Gone were the local DJs and commentators; in were the preprogrammed music stations and the religious and celebrity radio talk show hosts. Premiere even created “Premiere on Call” – a service that supplies fake callers for call-in shows, callers who fit the story or agenda of the show.

As a by-product of this change to religious and talk radio, this period in history saw the rise of a new kind of syndicated radio personality: the shock jock. As Wikipedia defines it, there are two overlapping species of shock jock:

  1. The radio announcer who deliberately does something outrageous and shocking (to improve ratings).
  2. The political radio announcer who has an emotional outburst in response to a controversial government policy decision.

And who are Premiere’s (iHeartMedia’s) top earning syndicated talk show hosts?

  1. The Rush Limbaugh Show
  2. The Sean Hannity Show
  3. The Glenn Beck Program

Premiere’s top competitor is Westwood One. Who are Westwood One’s top syndicated hosts?

  1. The Mark Levin Show
  2. The Savage Nation

These top 5 syndicated talk shows represent the lion’s share of syndicated commercial talk radio. Of the five, only Limbaugh had a significant following prior to the 1996 Telecommunications Act. The other four are creations of the post-1996 radio consolidation phenomenon.

This sort of political talk show would have been very difficult if not impossible to justify under the Fairness Doctrine that existed from the late 1940s until 1987. However, it’s important to note that it was not the removal of the Fairness Doctrine alone that led to the overnight explosion of right-wing shock commentators. The reason for the explosion is clear: these shows are products of the vertical integration and economies of scale enabled by the 1996 Telecom Act. The typical pre-1996 local radio station in Average, USA could never afford even one hour of Rush Limbaugh or Glenn Beck, but a giant conglomerate can actually save money by owning both the program and the distribution network and subsequently firing all the local employees of “Average 1310AM.”

And that is exactly what happened, in thousands of stations and communities all around the USA.

In a matter of a few years, this trend pushed high-volume shock-jock national-level syndicated radio right down into Average, USA. Gone were the local farming programs, the state politics talk shows, and Redbeard playing my demo. In came the right-wing talk radio movement, and the rest is history.

And that, my friends, is the direct line from the passage of the Telecommunications Act of 1996 to President Donald Trump.

Epilogue

This article would be remiss without its own version of the Fairness Doctrine. Because I think there’s another radio phenomenon that must be mentioned, and that is National Public Radio.

I’ll state here that NPR is overall a left-leaning organization, and has (had) a number of left / center-left talk shows such as Fresh Air, The Diane Rehm Show, The Takeaway, and Latino USA. However, only Fresh Air makes it onto the Top 20 syndicated talk show list.

NPR’s most successful programs are news programs: All Things Considered, Morning Edition, and Marketplace. These shows are also left /center left in focus, but as a rule do not offer editorial commentary.

In fact, only two progressive talk radio programs make it into the Top 20 – Fresh Air (NPR) and the Thom Hartmann Program, broadcast on the (commercial) Westwood One radio network.

As a result, the counterbalance of progressive, left-wing talk radio is dominated by an 800-lb gorilla called NPR, which crowds out other stations with its high-quality, listener-supported, and at least partially federally-subsidized broadcasting.

Now, while Terry Gross and Diane Rehm are surely left-of-center, there can be no comparison between the political slant of these sober NPR commentators and Michael Savage screaming “liberalism is a mental disorder” at the top of his lungs. There are no hyperpolarizing “shock jocks” on NPR stoking anger among their listeners with outbursts of rage. Nobody on NPR is “connecting the dots” Glenn Beck style to hypothesize various absurd yet certainly entertaining conspiracy theories. You will never hear an NPR personality refer to the Republican Party as a “terrorist network operating within our own borders.” And its most popular programs by far are the news shows – again, with next to zero commentary, and less-than-zero raving and pulling of hair.

So consider the polarizing effect of the top 5 syndicated radio programs:

  1. All Things Considered (NPR) – news
  2. Rush Limbaugh (Premiere) – conservative talk
  3. Morning Edition (NPR) – news
  4. Sean Hannity (Premiere) – conservative talk
  5. Marketplace (APM) – news

So NPR pulls the left towards the center, while commercial right-wing talk radio pulls the right to the right. Meanwhile, NPR’s large budget and high-quality commercial-free program sucks much of the air out of the room for any potential left-wing audience to support a more vitriolic, aggressive left-wing talk format (as though that would somehow help the country find balance).

More Reading:

Understanding the Rise of Talk Radio, Cambridge Core

The year that changed radio forever: 1996, Medialife Magazine

Why All The Talk-Radio Stars Are Conservative, Fortune

War of the Words: Political Talk Radio, the Fairness Doctrine, and Political Polarization in America, University of Maine

Dr. Chromelove (or, How I Learned To Stop Worrying And Love The Goog)

In Which We Pit the Lowly Chromebook Versus the Exalted Macbook Air

Let me get something out of the way: I am a straight-up Macintosh fanboi. After owning a couple of Macs in the early 1990s, I switched to PCs around 1997, largely because my software-development-centric job compelled me to live in a pro-PC world.

Then, around five years ago, I grew tired of maintaining Windows and the generally crappy hardware that runs it, and switched back to Mac. I bought a beast of a notebook: a 15″ MBP quad-core i7 with 16GB of RAM and a 1TB SSD. I wanted something strong enough to run Pro Tools natively, and to virtualize Windows and Linux machines without a stutter. And, five years later, that Macbook Pro is still a very current machine. It was expensive, but it was worth every penny, especially when amortized over five years.

The only trouble is: it’s big and heavy. As a machine to travel to and from the office, it’s fine, but these days I find myself increasingly traipsing all over the world, usually with everything I can carry in one backpack. Space and weight come at a premium, and I have a bad back to boot.

So I wanted to find a machine that would solve every computing need I have while I’m out on the road – basically, everything I use my Mac for except Pro Tools:

  • Must be small enough to fit in a day-bag and as light as possible, definitely < 4 lbs
  • Must handle all my basic productivity & social needs (mail, docs, spreadsheets, twitter, dropbox, etc.)
  • Must be capable of running a development LAMP stack and typical development apps like git, ssh, etc.
  • Must be capable of light-duty audio editing (just editing, not a multitrack studio)
  • Must have good battery life (all-day unplugged usage)
  • Must have a good screen and keyboard
  • Must be inexpensive in case it’s lost, stolen, or damaged while travelling
  • Strong cloud support a big plus (see above)
  • Easy connectivity to phone a big plus

I admit that I came very close to knee-jerking and purchasing a Macbook Air.  The MBA would definitely meet all these needs but one: it’s awfully expensive to be a “beater” notebook.  After pricing them out and deciding that a new MBA would definitely not fit in my budget, I considered buying a used MBA.  But even a used MBA in decent shape and well-appointed costs around $600, which I still felt was more than I really wanted to spend.

Then I decided I should do some research on Chromebooks.  Like most people, I had fallen victim to the “Chromebooks are useless unless you’re always on the Internet” trope.  I think this might have been true at one time, but after doing some reading, I learned that the ChromeOS world had advanced considerably since I last learned about it.  In particular I learned that Google has made great strides in developing “disconnected” versions of its key apps – specifically docs and spreadsheets, the key things that one wants to edit while disconnected.

The other thing that really piqued my interest (yes, it’s piqued, not peaked) was the stunning realization that someone had figured out how to install Ubuntu on a Chromebook.  And folks, this isn’t Ubuntu running in a virtual machine, but Ubuntu running natively in a chroot, side-by-side with ChromeOS on the same kernel.  I was skeptical but intrigued: with Ubuntu as a fall-back, I could rest assured that anything ChromeOS couldn’t handle, Ubuntu could.

“But Chromebooks are basically cheap pieces of crap,” was my next intuition.  It’s true that most devices pale in comparison with Apple hardware.  There’s no question that generally speaking, Apple makes the best hardware going, bar none.  But I don’t need perfect, I need good-enough and inexpensive.  And after a bit of research, I discovered an excellent machine for my needs, at least on paper: the Toshiba 13″ Chromebook 2 FHD.

Toshiba Chromebook 2 13″ FHD

After living with this machine for a little over a week, I think I’m ready to start drawing some comparisons versus the 13″ Macbook Air.  Here’s how the two stack up.

Price

Let’s get the 800-lb gorilla out of the way first.  No question who wins the first round.  At $330, the 13″ Toshiba is roughly 1/4 the price of a new 13″ MBA and 1/2 the price of a used MBA.  For the price of one new Macbook Air, you can buy a Chromebook for every member of the family.  ‘Nuff said.

Winner: Chromebook, by a country mile

Storage

The MBA comes with 128 GB of storage (256 GB is also available, but costs more) while the Chromebook comes with only 32 GB of local storage.  This is offset considerably by the fact that Google gave me 1TB of free Drive storage (100 GB is standard, but I already had that – your mileage may vary), and by the option of adding a 128GB SDHC card as extra storage ($65 on Amazon) to bring total local storage up to 160 GB.  Another mitigating factor is that ChromeOS minimizes use of local storage, while MacOS depends on it for everything, so ChromeOS presents a smaller footprint than MacOS in real-world use.

In the end I believe a 32GB Chromebook is no more limiting than a 128GB Mac for the applications I intend to use, and it’s easy enough to bump up the Chromebook with an SDHC card if you need the space.

Winner: tie

Keyboard & Touchpad

Here Apple is the clear winner, with a perfect-feeling backlit keyboard and a wonderful-to-use touchpad.  The Toshiba’s keyboard is perfectly usable and unproblematic but lacks the elegant feel of the MBA and is not backlit.   The touchpad is usable and sufficient but smaller and more plastic-feeling than the MBA.  It’s not a bad experience at all, but it’s hard to beat the best, and I think Apple offers the best keyboard / trackpad available.

Winner: MBA

Display

I hope you’re sitting comfortably, because the Toshiba’s display is absolutely spectacular.  How Toshiba managed to deliver a 13″ full-HD (1920×1080) display in a $330 machine is baffling, but they did, and it’s lovely.  Viewing angles are very good; colors are not perfect, but whites are white, blacks are deep black, hues are bright and nicely saturated, and the resolution is astonishingly crisp.  The screen does not attract fingerprints and doesn’t have any coatings that cause pixellation or moire effects, though glare can be a problem if you’re backlit.

Winner: Chromebook

Sound

The Macbook Air has arguably the best speakers in an ultraportable notebook, so the competition is awfully stiff.  However Toshiba has partnered with Skullcandy to deliver a similar listening experience.  I still prefer the MBA because it’s a little warmer, but I have to say that the audio from the Toshiba is very, very good for an ultraportable.

Winner: MBA, but just barely

Size and Weight

The Macbook looks smaller, but it isn’t – it’s just a design illusion.  In actuality the two machines are close enough in size and weight to be considered identical.  The MBA is a few hundredths of an inch wider and longer, the Toshiba is .06″ thicker.  The Toshiba weighs .01 lb less.

Winner: tie

Battery

The Macbook Air delivers better than 10 hours of real-world use, while the Toshiba falls short at roughly 8 hours.  8 hours meets my needs for “all-day unplugged use” but the winner is clearly Apple.

Winner: MBA

Inputs & Outputs

The two machines are very comparable.  Both offer 2 USB ports, audio out, power in, an SDHC slot, and a video output port.  In the case of Apple, the video is a potent Thunderbolt output, while the Toshiba offers a more basic – but more standard – HDMI output.  Unless you already use a Thunderbolt monitor, this means you’ll have to use a dongle adapter on the Macbook.  Both machines offer an HD webcam.  Both offer stereo mics, but the Toshiba’s are placed intelligently along the top of the display border (where the stereo image will actually correlate to the webcam), while Apple placed both mics in a poor location on the left side of the machine.  Toshiba’s power supply is smaller, has a longer power cord, and is cheaper to replace; but the Mac offers the MagSafe connector.

Winner: tie

Horsepower

The Mac easily trounces the Chromebook in terms of sheer processing power.  However the only instance I have discovered where the Chromebook’s processing power is insufficient is multitasking while streaming HD video – which if you think about it, isn’t much of a shortcoming, as most people will pause the video when they leave it to perform other tasks.  It’s safe to say that if video editing ever becomes possible on a Chromebook, it will pale in comparison to the Macbook Air.  But for all other day to day tasks the Chromebook is more than sufficient for my usage.

Winner: MBA

Apps

Here, the Macbook trounces the Chromebook in terms of choice – at least on paper.  The Mac ecosystem offers a wide variety of apps to choose from, while the ChromeOS ecosystem is still a work in progress and definitely lacking in the multimedia department.

However, for my day-to-day use, I’m quickly realizing that I am missing very, very little.  I already live in the Google ecosystem (Chrome browser, Gmail, Drive, Docs, etc.), all of which work as well as or better on ChromeOS.  The key thing I lack is a top-notch image editor, but Google has promised to deliver a ChromeOS version of Photoshop in the near future, and in the meantime there’s Pixlr.  For text editing and software development, there’s Caret, an outstanding replacement for SublimeText on the Mac. For light-duty music editing, I have to switch to Ubuntu (more on that later), but this gives me access to Audacity, which is a very full-featured editor.  I have yet to find a good video editor for Chromebook or Ubuntu, but this wasn’t part of my original requirements.

In short it’s pretty amazing to me that the ChromeOS ecosystem can even begin to compete with the Mac ecosystem with all of its advantages, particularly its 20+ year headstart.

Winner: MBA

LAMP-based Development

On the Mac, I use MAMP Pro as a turnkey LAMP server for web development.  It’s pretty hard to beat turnkey, and MAMP Pro is really easy to use and set up.  There does not currently exist a MAMP-like turnkey server solution for ChromeOS.

However, I was very surprised to discover how well Ubuntu runs alongside ChromeOS.  It isn’t turnkey – you’re going to have to get your hands dirty – but the process is surprisingly simple: you enable “developer mode” on your Chromebook, you install a script called crouton, you execute a few shell commands, and voila! Ubuntu is running right alongside ChromeOS – you literally switch back and forth between the two OSes by hitting CTRL-ALT-BACKARROW and CTRL-ALT-FORWARDARROW.  It’s super-slick, and opens up your Chromebook to the entire world of Linux.  I encountered zero issues with the process – no driver issues, no battery issues, nothing – though as usual I had to noodle around with Apache settings to get the environment working to my satisfaction.
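To give you an idea of just how little is involved, here’s a rough sketch of the crouton install from memory – the release and desktop target (trusty, xfce) are simply the ones I happened to pick, so check the crouton project page for current options:

# In the ChromeOS shell (CTRL-ALT-T, then type "shell"), with the crouton script downloaded to ~/Downloads:
sudo sh ~/Downloads/crouton -r trusty -t xfce
# Once the install finishes, start the Ubuntu desktop session:
sudo startxfce4

That’s really about it; from there you flip between the two environments with the keyboard shortcut above.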

While it’s true that turnkey beats DIY, if you’re a developer, you’re already accustomed to getting your hands dirty, and you’ll find nothing onerous about the process of installing Ubuntu alongside ChromeOS.  It’s weirdly easy and took me roughly 45 minutes, soup-to-nuts, which included the 20 minutes to download and install Ubuntu.

The cool part (for me) is that once you have a local development server set up and running, you can resume your development workflow entirely in ChromeOS, and forget completely that there is an Ubuntu server running alongside.  You can edit files on the local filesystem directly using the Caret editor.  SSH is provided inside ChromeOS by an extension called SecureShell, so it’s quite easy to work with remote servers right inside the Chrome browser.  It all works a lot better than I would have ever guessed.
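As for standing up the LAMP stack itself inside the Ubuntu chroot, it’s the usual Ubuntu drill – a sketch only, with the caveat that package names vary a bit by release:

# Inside the Ubuntu chroot: install Apache, MySQL, and PHP via the standard Ubuntu LAMP task
sudo apt-get update
sudo apt-get install lamp-server^
# Plus the usual development odds and ends
sudo apt-get install git

If memory serves, crouton shares the Downloads folder between ChromeOS and the chroot, which is one easy way to let Caret (on the ChromeOS side) and Apache (on the Ubuntu side) see the same project files.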

Winner: MBA + MAMP Pro

Backup / Restore

The idea behind an ultraportable is that if you lose or break it, it should be of minimal impact.  Here the Chromebook kicks serious ass.  Unlike the Mac, which relies on “old-school” backup & restore solutions like Time Machine, the Chromebook is literally a “throw it away and buy another” type machine.  All of your data is already backed up on Drive.  And all of your apps live as Chrome extensions.  So if you get a new Chromebook, you log in to your Google account for the first time, and magically, your device restores to exactly how your old device looked, without installing a single application.  Technically you can restore a MBA with a Time Machine backup, but seriously, it’s an entirely different and more perilous process.

Winner: Chromebook

Etc.

Apple has a good track record of keeping OS updates to a minimum, but this is starting to change as Apple pushes the Mac more and more into an iOS-like App Store model.  There are more and more updates and downloads that require restarts, etc.  ChromeOS, by contrast, is more or less always up-to-date.  I really like how minimal the OS footprint is on the Chromebook and think that this bodes well for the device’s long-term usability.  I really admire how painless the install and update process is for apps.

The Mac is a metal-body machine, and while the metal can dent or bend, it’s undeniably more premium grade than the almost-identical-looking plastic used in the Toshiba.  Apple manages to brand its computers with an actually-cool illuminated logo, while the lid of the Toshiba sports ugly “Toshiba” and “Chromebook” branding.  Like all non-Apple computers, the Toshiba ships with a plethora of stupid, ugly stickers that have to be removed.

If you have an Android phone or a Chromecast dongle, you’ll love the seamless integration with ChromeOS.  Likewise, Apple offers similar integration with an iPhone and Apple TV, but those devices can cost considerably more than their Android counterparts.

Summary

Let’s face it: a $330 ChromeOS portable shouldn’t be able to beat a $1200 Macbook Air.  It’s a terribly unfair comparison.  The Mac has a superior processor, more storage, more memory, a better keyboard and trackpad, and of course a “full” OS and the 30-year-old Mac app ecosystem.  It’s like a bantamweight getting in the ring with Tyson.  Not a fair fight at all.

What’s surprising is just how well the Chromebook actually stands up in real-world use.  The display is better, the size and weight are identical, and for typical day-to-day chores, the Chromebook is just as usable as a Macbook Air.  Battery life isn’t quite as good but is still very good.  It meets my needs for a development machine just as well as a Macbook Air.  Software updates are easier.  There is essentially no need for backups, as all the data is backed up to Drive automagically, and the OS is practically disposable.  The only place I care about where I think the Chromebook falls short is multimedia editing.

So the verdict: if, like me, you’re a power user, you will probably not be happy with a Chromebook as your sole device.  There are still areas like multimedia where the low-power processor and / or lack of robust applications will prevent you from ditching that Mac or PC.

But if you’re a “consumer grade” user who doesn’t edit music or video, or if you’re a power user who needs a cheap, lightweight, travel-ready portable, then you owe it to yourself to take a good hard look at Chromebook.  Especially if you’re already using the Google application suite.

Quick Review: Logitech Tablet Keyboard for Android

As I mentioned in my last post, perhaps the secret to phone blogging is to carry a keyboard.

Today I’m writing this blog post using my Logitech Tablet Keyboard for Android and the WordPress Android app, and it’s definitely a completely different experience.

The keyboard itself is about as small as it can be and still be considered “ergonomic.”  The keys and spacing are almost exactly the same proportions as my Macbook keyboard, and though the keypress travel is a little shallower, the overall typing experience is very good.

The keyboard travels in a case that doubles as a stand for your phone / tablet.  This makes it very easy to convert your device into something with a form factor very similar to a computer.

I have never gotten used to touching a screen instead of moving a mouse / trackball / touchpad, and I wonder if I ever will.  I find the experience of taking my hand from the keyboard and lifting it to the screen disruptive to data-entry (as well as leaving the inevitable fingerprints)  – but it works.

The keyboard works very well with the Android app.  I haven’t taken the time to learn the various shortcuts available with the keyboard, but the usual hotkeys like CTRL-C work as expected.

Of course, carrying a keyboard is not much better than just carrying another device, like a Macbook Air or Chromebook.  The keyboard is only a little smaller and lighter than a small computer.  One advantage comes to mind though: with the phone + keyboard solution, I always have the option of jotting down a quick blog post just using the phone sans keyboard, or jotting down some ideas in the app and fleshing the post out later with the keyboard.

Of course you can achieve a similar result using a phone + computer but this option saves a step.  And of course it’s easier to post photos taken on the phone directly from the phone itself, instead of having to transfer the content first to the computer.

Maybe the smartphone didn’t kill the blog after all.

Did the Smartphone Kill the Blog?

Used to be, I consumed all of my internet content on my computer. Any time I wanted to read an article, check my friends’ statuses, send an email, check the weather… it always meant a trip to the computer.

In that world, blogging came naturally. Here I was already at the computer, with its spacious and ergonomic keyboard inviting me to type my thoughts. It was almost irresistible.

I created a lot of content back in those days. I created the original prorec.com, participated in a group blog with my friends on cuzwesaidso.com, started and killed a humor site called skeptician.com, and of course blogged here on my personal blog.

But these days I no longer head to the computer when I want to interact with the internet. Now I reach for my phone.

The phone is a lovely way to consume internet content. It’s always with me. It’s 4G wireless. The form factor is convenient. I read novels on my phone.

But as a data entry device, it’s horrible. The keyboard is no match for that of my Macbook, and the screen is just too small for editing hundreds or thousands of words. And no phone app can compete with the page-layout power of a real computer.

And so I rarely blog anymore. It’s become inconvenient. When I want to say something I’m likely to jot down an email to my buddies (email being a forgiving text medium that does not mandate perfect grammar and page layout, where incomplete sentences and clumsy writing aren’t a showstopper). Or I’ll tweet or post a photo.

But blogging? This is my first blog post in years. I suspect it could well be my last. I’m actually writing this entry on my phone. And, I gotta tell you, it’s a royal pain in the ass, even though the WordPress app is mature and powerful and I’m hard-pressed to see how it could be improved.

Perhaps a Bluetooth keyboard could work. But then I’m practically carrying a computer again.

Time will tell if the phone has killed the blog. But from where I sit, things are looking gloomy in the blogosphere.

The Problem with EC2 Micro Instances

If you keep up with my blog you know that I really dig the EC2 micro instance running the Bitnami WordPress stack.  I’ve written about it before.  I’ve been hosting a few low-utilization web sites on mine for over a year now and generally speaking, it’s a great concept.

The problem is the occasional lockups.  Maybe once a month or so, the site dies, and the only information I can find to help me troubleshoot is that there have been prolonged periods of high CPU utilization whenever the crash occurs.

Well it turns out this is a problem with the Amazon micro instance.  On a micro instance, Amazon allows CPU bursting of up to 2 cores – but if CPU utilization stays high, it gets severely throttled.  And when that happens, sometimes, it crashes the server.

Today I decided to move up from a micro instance to a small instance.  I’m using the same disk image, so I’m running the same virtual machine.  But where the micro instance has up to 2 cores (with throttling) the small instance just has one core.
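Incidentally, resizing an EBS-backed instance is painless. I actually did it through the console, but with the AWS command-line tools the whole dance is roughly the following – the instance ID is a placeholder, and you’ll want to double-check the exact flags against the current docs:

# Stop the instance (it must be EBS-backed and stopped before its type can change)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
# Change the instance type from t1.micro to m1.small
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"m1.small\"}"
# Start it back up - same EBS root volume, same virtual machine, just different horsepower
aws ec2 start-instances --instance-ids i-0123456789abcdef0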

How much difference did the change make?  Turns out, a big difference.  Micro instance throttling is far more debilitating than I ever would have guessed.

Here’s typical CPU utilization for my server running as a “micro” instance:

As you can see, there is a bump every half hour as a cron job cranks up – and sometimes the CPU is maxed out for several minutes.  That almost certainly results in the web site becoming unavailable or at least very sluggish.

Here is the exact same virtual machine running as a “small” instance:

Wow – that’s an incredible difference: almost 10X the performance!

So while the micro instance is a great way to “get started” with EC2, the “small” instance provides far greater value – at 4X the price, it offers roughly 10X the performance.

Bring Social to the Blog, or Bring the Blog to Social?

I create content: I write, I shoot photos, and I create music. I also make the occasional video.

I want an online location where I can keep up with all my content, and my interaction with others.

My website – a WordPress blog I self-host – the one you’re reading now –  is the only place that truly gives me the control I want over my content. With my blog, I can

  • Create text posts with any length or formatting I like
  • Upload photos at any resolution with my choice of viewers
  • Upload music for download or insert Soundcloud or Bandcamp widgets
  • Interact with my guests using comments or Disqus
  • Integrate 3rd party content from other sites that offer feeds
  • Maintain 100% creative control over the look, feel, format, and style

The problem – and it’s a biggie – is that the now-dinosaur-like “blog” format is completely isolated from social media. If I post something here on the blog, a few dozen people will see it. Nobody really reads my blog. But if I post something there, on Google+, a few hundred or even a thousand people might see it. It might even go viral, and millions of people might see it. On my blog, there is a next-to-zero chance that any content will go viral.

Of course, I can do what Guy Kawasaki does: publish on my blog, and link back to my blog from social media. But by failing to bring the content actually into the social media stream, I’m losing a lot of potential readers.

Or I can do what guys like Robert Scoble do: post everything everywhere. Scoble is ubiquitous. I don’t know how he can keep up with it all. In the memorable words of Mick Jagger, “I just don’t have that much jam.”

Alternatively, I can migrate to the available social tools instead. I can post my text diatribes over on Google+, but I have no control over the formatting and the layout is terrible for anything longer than a few paragraphs. I can also post my photos there and that works, mmm, OK, at best. I can’t post music, but I can share videos (a terrible situation) if I upload them to YouTube first. I can interact, which is probably the best feature. But I have zero creative control over the look and feel of my content. And I can’t integrate with 3rd party tools like Instagram, Twitter, Tripadvisor, or Hipster where I also create content.

So I end up with most of my most important content – my long blog posts and my music – hosted outside Google+.

What I really want – what someone needs to figure out – is how to have my cake and eat it too. Allow me to have my content on my blog – give me full creative control over it – but also allow me to interact on my blog through social media.

Alternatively, allow me to do everything I can do with my blog on a social media platform: customize it, post anything on it, and integrate anything into it.

The closest thing out there, actually, is Tumblr. Tumblr offers a social platform that is rich in content and customization and strong in supporting “viral multimedia.” The two problems Tumblr has are:

  1. Almost zero support for interaction – the only real interaction on Tumblr is sharing others’ posts, and
  2. Almost zero support for long text, since 99% of the content on Tumblr is visual. It just doesn’t work well for long posts, like this one.

Let’s figure this problem out together! I know I’m not alone. What are you doing to combat this problem?

Backing Up a Bitnami WordPress Stack on an AWS Micro Instance

If you follow me, you know that I am quite enamored with Amazon’s EC2.  Scalable, reliable, powerful, and cheap – it’s a revolution in computing.

The smallest and least expensive EC2 instance is the Micro instance.  It’s perfect for a light-duty web server: it has low memory and CPU capability but is capable of bursting to two processors, giving it responsiveness when you need it.  And Bitnami has the perfect partner for your Micro instance: a WordPress stack customized to live in the cramped space of the Micro instance.

What you get in the package is nice: a complete LAMP stack running on a simplified Ubuntu 10.04 server with WordPress preconfigured and ready to go.  Bitnami conveniently puts the entire stack in a single directory – you can zip that directory and drop it on another server and with very little effort you’re up and running again.

There’s plenty of info on the Bitnami site, so if you’re interested in setting it up, head over and check it out.

Where I was left a bit in the dark was… backups.

My first instinct was to use an S3 rsync tool to sync the Bitnami stack to S3.  There’s S3rsync, but that costs money, and I’m seriously committed to spending the smallest amount of money possible on my web server.  So I passed and settled on using S3cmd instead.

Using S3cmd, I was able to write a simple script that performs the following:

  1. It stops the Bitnami stack temporarily (this is acceptable in my application)
  2. It ZIPs the contents of the Bitnami folder to a ZIP file that uses the date as the filename (2011-07-11.zip)
  3. It copies the ZIP file to an S3 bucket
  4. It restarts the server

As a once-a-week backup it worked pretty well.  Backups were a little large, because they contained a full snapshot of the entire stack, but S3 storage is cheap, and it’s nice to have your entire stack in a single backup file.
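For the curious, the script was nothing fancy – something along these lines, where the paths are from memory and the bucket name is obviously a placeholder (it also assumes s3cmd has already been configured with your AWS keys):

#!/bin/sh
# Stop the whole Bitnami stack (Apache + MySQL) so the files are quiescent
sudo /opt/bitnami/ctlscript.sh stop
# Zip the entire stack directory, named by date (e.g. 2011-07-11.zip)
zip -r /tmp/$(date +%F).zip /opt/bitnami
# Push the archive to an S3 bucket, then clean up the local copy
s3cmd put /tmp/$(date +%F).zip s3://my-backup-bucket/
rm /tmp/$(date +%F).zip
# Bring the stack back up
sudo /opt/bitnami/ctlscript.sh start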

However, occasionally, the ZIP process would crash the little Micro instance (HT to +Ben Tremblay for first noticing during a heated debate on his Google Plus page).  So I started looking for another solution, and realized there is a much more elegant and powerful option: automated EC2 snapshots.

Turns out there are a number of different ways to skin this cat.  I chose Eric Hammond’s ec2-consistent-snapshot script.  It proved to be a good choice.

Since the Bitnami Ubuntu 10.04 server was a bare-bones install, a number of prerequisites were missing, notably Perl libraries like DBI and DBD::mysql.  Fortunately all of the answers were already available in the comments section of Eric’s web page.  For me, all I needed to do was:

# Build tools needed to compile the Perl modules
sudo apt-get install make
# The EC2 API bindings used by ec2-consistent-snapshot
sudo PERL_MM_USE_DEFAULT=1 cpan Net::Amazon::EC2
# C library headers, plus the Perl database modules used to flush MySQL
sudo apt-get install libc6-dev
cpan -i DBI
cpan -i DBD::mysql

The first time I tried it, it worked.  One line of code – in about 0.8 seconds I had taken a snapshot of my disk.  In no time at all I had installed a CRON job to automatically snapshot my server.
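The cron entry itself is about as simple as it gets – something like the line below, where the volume ID is a placeholder for the instance’s EBS root volume and I’m glossing over how the AWS credentials get passed (the script has options for that):

# Snapshot the EBS root volume every night at 3:00 AM
0 3 * * * /usr/local/bin/ec2-consistent-snapshot vol-12345678 >> /var/log/ec2-snapshot.log 2>&1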

EBS snapshots are always incremental (only the changes since the last snapshot are written to disk) and restore in a flash.  I’ve done a restore and it takes just a few seconds to reinstantiate a machine.  And the actual backup is absurdly gentle on the machine – the script runs in about a second.  Bang! Instant incremental backup.  It’s a miracle.

The script is designed to flush the database and freeze the file system so that the snapshot is performed in a “guaranteed consistent” state.  Unfortunately, to freeze the filesystem you have to be running XFS, and the Bitnami machine isn’t.  While I agree that it is important to quiesce the database prior to snapshotting, I don’t know that it is required to freeze the filesystem, since EBS volumes are supposedly “point in time consistent.”  Regardless, my web sites do so little writing to disk that it is inconceivable that my file system would be in an inconsistent state.

In short: *rave*.

Lessons in WordPress Conversions

Over the past few days I’ve been performing a pretty significant WordPress migration for a set of sites that I have been hosting.

The source is a set of individual WordPress sites running on a small Amazon EC2 Windows instance.  I migrated them to a multi-site installation running on a micro EC2 Linux instance.

Over the course of the conversion I learned a variety of lessons.

First, I learned that the WordPress multi-site (“network blog”) feature is still fairly half-baked.  You have to be prepared to get your hands pretty dirty if you want to make it work.

I also learned to really appreciate the Bitnami WordPress Stack AMI.  It allows you to spin up a fully-configured, ready to use Ubuntu LAMP / WP stack onto an EC2 micro instance with a minimum of fretting.

I will update this post with some details of the process for those interested.  In the meantime – success is mine!

iPad vs. Android vs. Windows 8 – Further Thoughts

Several people made some good points in regard to my article on iPad vs. Windows 8.

The most salient one, and the one I keep hearing, is the comparison to iPod.  It goes like this:

Yes, Apple only garnered a minority market share with the Macintosh and the iPhone.  But with the iPod, Apple was able to create and hold a substantial majority market share by establishing such a strong brand identity that “iPod” became synonymous with “portable MP3 player.”  Now, the iPad seems to be holding a majority market share as well by making itself synonymous with “tablet.” Therefore we should compare its trajectory to the iPod, not the iPhone or Macintosh.

The other salient argument goes like this:

Apple has a lock on the “high end” tablet market.  The iPad is better conceived, designed, and constructed than its Android or Windows counterparts.  Users really aren’t that interested in a marginally lower priced machine that offers lower design / build quality, and it’s hard to see how other manufacturers can “out-quality” the iPad – or whether users even want a “better quality” tablet than the iPad.

I like argument #2 best.

The problem with argument #1 is that it ignores the market dynamics.  Macintoshes, iPhones, and iPads all have one thing in common: they are pieces of hardware running an operating system.  OK, technically, this is also true of the iPod, but only technically so: the iPod’s OS is more like embedded firmware.

Apple’s minority market position with the iPhone and Macintosh stems from the fact that the OS and hardware are coupled.  Apple competes not just with Microsoft (for the OS), but with a gazillion other PC manufacturers (for the hardware).  It does this with phones as well, competing not just against Android but against every phone maker that produces an Android device.  So Apple can sell more PCs than any one PC maker, and more phones than any one Android manufacturer, but against the market as a whole it remains a minority player, albeit a large, powerful one.

Well, tablets are no different from phones and PCs: they are a piece of hardware running an OS, and it is a matter of time before tablet makers are able to closely copy the hardware designs of the iPad and the software advantages of iOS and release an Android tablet that competes well.  Will people buy it?  Yes.  Android has a majority market share in phones and a compelling tablet offering will appeal to that majority.

Windows is more of a wild card here.  In my previous post, I pointed to the fact that corporate IT departments will be much more likely to adopt a Windows 8 tablet than an iOS or Android tablet since it is an OS which they already support and understand.  I still think this is true.

Many people countered with the argument that with HTML5, it is irrelevant which device you support.  I agree, but I remain skeptical that corporate IT departments will develop mission-critical wireless HTML5 applications.  Corporate IT is happy with hard-wired web apps; running a mission-critical app over 3G or 4G networks is a far riskier proposition.

If I were asked to develop a mission-critical application that ran wirelessly over a 3G or 4G network, I would almost certainly develop a “fat app” that replicated its data with the mother ship and could run at 100% when the network was unavailable.  And, as a corporate IT developer, I would lean heavily on Windows as the platform of choice for developing that application, especially since the odds are very strong that the company already has a sizeable investment in technologies like .NET and MS SQL Server.

If (and this is a very big “if”) Microsoft can deploy a compelling tablet version of Windows before the market has saturated, I think there is a good chance that they will capture significant corporate sales.  As we’ve seen in the past, inability to penetrate the corporate market was a serious impediment to Macintosh and, for a while, also the iPhone.  If Microsoft can execute, this is a strong opportunity for them to stay in the game.

Windows 8 vs. iPad: Advantage Microsoft?

There’s been a lot of buzz in the industry press recently about Windows 8, the new touch-centric Windows from Microsoft.

Much of the press has been understandably skeptical.  Apple definitely hit a home run with the iPad, building it on top of the iOS mobile touch interface.  Microsoft, instead, is building “up” from Windows by layering a new browser and application UI paradigm on top of existing Windows.  It’s easy to see where Microsoft might stumble, and hard to see how Windows 8 could possibly approach the seamless elegance of iOS.

And, the truth is, it probably won’t.

And, the truth is, it probably won’t matter.

Here’s why.

A History Lesson

The year is 1990.  I’m sitting at my workstation in Classroom 2000 on the University of Texas campus in front of two state-of-the-art machines: a 386-powered IBM PS/2 running OS/2 and Windows 3.0 and a Motorola 68030-powered Mac IIci.

I’m teaching a class of IBM Systems Engineers (a glorified term for salespeople) who have come to learn about desktop computers. In this class we’re learning about PostScript, but really, the whole exercise is to throw Macintoshes in their face to scare the hell out of them.  And it works.  More than once, you hear an IBM employee mutter, “we can’t win.”

But they did.

In designing the Mac from the ground up as a windowed operating system, Apple has the clear technical advantage.  The machine is slick as hell: 32 bit architecture, peer-to-peer networking, 24 bit graphics, multitasking, and a beautiful, well-conceived UI.  Conversely, in PC-land, there’s Windows running on top of 16 bit DOS: a veritable Who’s Who of Blue Screens of Death and a nightmare of drivers and legacy text-based apps running around.

And yet, Apple failed to capitalize on their obvious competitive advantages, barely growing their market share over the next 10-15 years.

Why?  Because the largest purchasers of computers are corporations, and corporations purchased IBM / Microsoft as an extension of their existing computing platform.  In part this was out of ignorance of what the Macintosh could do, and in part it was due to specific shortcomings of the Macintosh platform – but neither was the decisive factor.  The real reason the Macintosh never broke through the corporate barrier was that it never made sufficient financial sense to throw out all the legacy apps and start over again on a new hardware and software platform.

Office applications are not the engine of the productivity boom.  Word processors and spreadsheets don’t offer competitive advantage.  Factory automation, enterprise resource planning, sales force automation, customer and supplier portals – these are the expensive and risky custom-built applications that drive competitive advantage.  For that reason, you sometimes still see applications that remain GUI-less – you don’t screw with stuff that works – and oh by the way, throwing a nifty UI on an app like that can cost a fortune and offer negligible – even negative – payback.

So to synopsize our history lesson: Apple failed to sell to corporations because it never made good financial sense for those corporations to reinvent their line-of-business applications for a different platform.  Apple established itself as a great consumer brand and carved out niches in media production and desktop publishing – markets that were not tied to traditional corporate IT.  But because the corporate world used PCs, most individuals purchased PCs for the home, and Apple was unable to substantially grow its market share in spite of technical advantage and overall coolness.

Fast-Forward

We are now seeing the same history lesson repeat itself with the iOS-based iPad tablet going head-to-head against the next generation of Windows tablets.  In order to create the ultimate tablet experience, Apple has adopted iOS as the application platform for the iPad.  And while the iPad is a formidably slick and compelling machine, iOS is probably not the operating system of choice on which to develop mission critical corporate IT applications.

Enter Microsoft with Windows 8.  Will it be clunky?  Almost certainly.  Will it fray around the edges?  Yes.  Will there be jarring experiences where the user drops suddenly and unexpectedly into the old mouse-based paradigm?  Definitely.

But Microsoft can offer something that Apple can’t.  There are thousands, maybe millions of line-of-business applications deployed with technologies like C++, .NET, Access, and SQL Server.  Companies cannot and will not jettison them in order to rewrite for iOS.  But they will extend them to a Windows 8 tablet.

Microsoft’s decision to layer a touch interface on top of Windows is the only logical decision.  It’s the same decision they made in the late 1980s when they layered a GUI on top of DOS.  With Windows, Microsoft retained the established customer base while expanding their market reach by extending, rather than reinventing their operating system.  The business advantage outweighed the technical disadvantage. With Windows 8, they can do it again.

I think the decision is brilliant.

The Proof is in the Pudding

Now, we simply have to wait and see if Microsoft can deliver.  That may be a stretch.  Microsoft has a “hit-miss-miss” record with Windows: it was not until Windows 95 that Microsoft pulled within reach of Apple, and only with Windows XP was Windows solid enough to truly compete technically.  Microsoft cannot wait 10-15 years to catch up this time, as it did with Windows.

I think that it’s fair to guess that Windows 8 will not be an iPad-killer, no matter how great it is.  Fortunately, it doesn’t have to be an iPad-killer.  It just has to establish a baseline of functionality and provide a suitable application development platform. Corporations will develop impressive line-of-business applications for the touch interface – specifically field-worker automation applications – if the platform is robust.

If compelling touch-based business applications can be deployed on Windows 8, it will have done its job: it will have convinced corporations that Windows can meet their needs for a touch-tablet computer, and Apple will be stymied in their attempt to finally break the barrier keeping them out of corporate America.

PS: I am writing this on my brand new, and very sweet, MacBook Pro.

To the Dark Side

I have finally done it.  Please forgive me, Mom.

That’s right.  I bought a Mac: the new quad-core i7 15.4″ MacBook Pro.

And what’s more: I’m switching to Pro Tools 9 from Sonar.

I’ve had a long love-affair with my Dell Mini 9 Hackintosh.  The little sucker went with me everywhere.  I used it as my travel computer.  I used it in my keyboard rig.  Small to the point of silly, and relatively inexpensive, it was the perfect travel machine.

Well, it got stolen in Mexico.

A confluence of needs had me wanting a new MacBook Pro anyway, and what better time than after the sudden loss of the Hackintosh.  So I have finally taken the plunge for real.

More to come…

Sony: Security Fail Redux

By now, everyone knows that Sony’s Playstation Network got hacked earlier this year.  It’s a big mistake that shouldn’t have been made, but we all make mistakes.  The key to Sony’s viability as a player in the online world is that it be able to learn from its mistake.

Today, Sony is reporting a new intrusion:

In a warning to users issued on Thursday, So-net said an intruder tried 10,000 times to access the provider’s “So-net” point service […] from the same IP address.

There is absolutely no reason why any online service should allow an intruder to make 10 unsuccessful login attempts from the same address, much less 10,000.  This represents a complete failure to grasp the fundamentals of security, and any reasonable observer would have to conclude that Sony is completely security-blind and totally naive.  You can expect many, many more stories like this to emerge unless the company undertakes a complete reinvention of its online presence.

Hackintosh vs iPad

A lot of people come into the coffee shop with iPads and are intrigued by my Hackintosh…. As a portable, I’ll take my Hackintosh over an iPad for most everything I do, with some caveats.

First off, it cost about $500 as configured (includes the cost of a Snow Leopard install disc) – 2 GB RAM, 64GB SSD (soon to be 2x for 128 GB total), camera, Wifi, bluetooth.  That’s considerably cheaper than a comparable iPad (actually there is no comparable iPad, but if there were it would likely cost close to $1K).  It can run almost any Mac, Windows, or Linux app (I *love* the Ubuntu 10.10 netbook edition) – “almost” because it won’t run apps that exceed its screen size without connecting to an external monitor.  With 3 USB ports and an SD slot, it can connect to a KVM so I can use it as a desktop Mac – it’s about as powerful as a Mac mini.  And it runs Snow Leopard *very* well – I never have lockups, everything works – I even use it onstage for my software synths, which usually are the litmus test for stability.

The keyboard is cramped and requires a slight relearning curve, but I’m confident I can type faster on it than on the iPad’s on-screen keyboard – if you have an iPad keyboard case, however, you’ll win.  The battery life is less but still impressive – ~5 hrs of heavy use, 6+ hrs of light use, 48+ hrs of sleep, depending on monitor brightness.  It’s about as thick and heavy as an iPad in a keyboard case.  It’s also tough.  Mine has been dropped on concrete many times thanks to clumsy drummers and shows almost no signs of wear.

I have the Mini 9, while Vanessa has the 10v.  The 10v has a slightly larger screen and keyboard (the keyboard is a lot less cramped) and can accept a standard 2.5″ hard drive, so you can up it to 500 GB or more, or use a big SSD (if you can afford it).  The 10v, by all accounts, is as good a Hackintosh as the Mini 9.

Of course there is a level of tweakery required to get the thing running, but it’s actually fairly easy to do, as there are really good guides and helper apps available now.  In short, you copy a file onto one small USB drive, copy the Snow Leopard installer onto another, larger drive, and then boot up from the USBs.  Two or three clicks and you’ve installed 10.6.  A couple of tweaks and you’re ready for 10.6.7.  The only thing that doesn’t work at that point is the internal mic, which requires a hack to enable (USB headset mics work fine).  The hack took me about 15 minutes to complete.

Next step: install Snow Leopard on my desktop.

Samsung Epic 4G AMOLED vs. HTC EVO 4G LCD Screen First Impressions

I had a chance today to do a hands-on comparison between the two major 4G contenders from Sprint: the HTC EVO and the brand-new Samsung Epic.  Let me cut to the chase: I found the Epic’s AMOLED display to be unusable.

The Epic’s AMOLED leaps out at you with brilliant, oversaturated colors that make your existing phone look black-and-white, while the EVO’s LCD display humbly presents a neutral, normalized color palette.

On first impression, the Epic grabs your attention. “WOW, look at THAT!”  Images and videos just POP to life in a way you’ve never seen before.

But, on second impression, the AMOLED display comes up short.

The first thing you’ll notice is that whites are blue, not flat white.  The oversaturated colors pop at first, but eventually you start to see them for what they are: oversaturated.  These are minor problems, but problems nonetheless.

The major problem – the dealbreaker – is text.  The AMOLED display is incapable of rendering smooth font edges.  Instead of a nicely-blurred edge, individual pixels appear, resulting in difficult-to-read text.  The smaller the font, the more obvious the problem, as the eyes focus on tiny font details that turn into individual pixels.

It appears to me that the pixels in an AMOLED display are each surrounded by a tiny black border.  This is unnoticeable when displaying photos or videos, but black-on-white text – or white-on-black text – clearly shows the shortcoming.

I was hopeful that the Epic could be my new phone, but it and its Galaxy S brethren are now crossed off my list.  AMOLED fail.

Ubuntu’s Multimedia Challenge

Having used Ubuntu exclusively now for a few weeks, I am a true believer.  It’s just a great operating system, with great looks, speed, and power.  And it does almost everything a modern platform should.

Until you need multimedia power.  Specifically, photos, music, and video.

Ubuntu’s photo managers – F-Spot, Shotwell, and the like – are all hopelessly simple.  They tend to choke on large collections.  The first time I started F-Spot, I pointed it at the folder containing my photos and watched it die a miserable death.  They don’t organize well by metadata: sort by date, or sort by folder – that’s it.  They either don’t connect to photo sharing websites, or connect only very clumsily.  The editing capability is terribly lacking.

Here, the paradigm is Picasa.  Picasa handles importing, organizing, simple editing, and uploading to a sharing site with absolute aplomb.  It is super fast, handles huge collections with ease, is very easy to use, and is surprisingly powerful.

Picasa is available as a download from Google (it isn’t available through the Ubuntu software manager) and only runs as a WINE app.  It’s stuck on version 3.0 and there is no sign of a future release, even though Google remains staunchly pro-Linux.  Nevertheless, Picasa 3.0 running under WINE is far better than any native Linux alternative.

In order for Ubuntu to succeed in the mainstream, it needs native Picasa, or a sufficiently robust alternative.

Next up is music.  The default player, Rhythmbox, is woefully inadequate.  I was sorely put to the test when I tried to perform the most basic of music management tasks: create a playlist and put it on my iPod.  I created the playlist easily enough, but found there was no way to copy it to the iPod.  FAIL.  Undaunted, I copied the tracks from the playlist to the iPod, then created a new playlist on the iPod and dragged the tracks into it.  This almost worked, except that when I tried to order the tracks to my liking, I found that they remained in the original order on my iPod.

And then when I tried to rename the playlist, Rhythmbox crashed.

Here there is One App to Rule Them All.  iTunes?  Ha!  I scoff at the suggestion.  Nay, not iTunes.

MediaMonkey.  Far and away the best music management app, ever.  By a longshot.

If all you do is buy songs from iTunes and play them in iTunes and on your iPod, then iTunes might be good enough for you.  And it would be better than any of the native Linux alternatives.  Of course, iTunes on Linux ain’t happening.

But if you have a complex mess of MP3s, FLACs, stuff your friend loaned you on a flash drive, songs you ripped from your CDs, and other serious organizational tasks, MediaMonkey’s database-driven design puts everything else to shame.

In order for Ubuntu to succeed as a mainstream OS, it needs music management software on par with MediaMonkey.

I’ve already mentioned how badly I hit my head on video editing.  I don’t expect Ubuntu to ship with a free copy of Vegas.  But the existing video apps are super weak.  Let’s pick one and run it over the NLE goal line, OK?

At least Ubuntu runs VirtualBox well.  I will need it for a few Windows apps that I’m not going to be able to leave behind.

At least, not yet.

Google Voice FAIL

Recent message transcription from Google Voice:

I’d love to be back on my staff. If the i’m on the put. Could be a me telephone and I’m Mesa. Maybe Clinton you’re leaving was on the within these things. But if it works isn’t gonna get there, and we had a second two passes. Bye.

I love it when a good plan comes together.

Ubuntu: Video Editing FAIL

Well this time I really have bumped my head hard on Ubuntu.

Video editing apps are simply a shambles.  The default editor, PiTiVi (which apparently is Ubuntu-speak for “PiTiFul”) is terrible.  Editing is a joke.  It would take all morning to list my complaints, which isn’t worth my time.  Suffice it to say, it sucks.

I installed a half-dozen competing editors and found that the only app that comes close to being usable is Kdenlive, which is still pretty hard to use.

This is all to be expected, and is why I kept my Windows machine for multimedia editing.  But if you are planning a switch to Ubuntu, and expect to get any multimedia work done on it, watch out.  The apps are very, very weak at this time.

Ubuntu: PowerWIN

Update to previous post: I downloaded and installed the Linux ATI video driver for my Lenovo T400.  Immediately my battery life doubled.  I am now seeing battery life roughly comparable to Windows 7 – approximately three hours.  Additionally, the heat generated by the computer is much lower than with the old driver.  Apparently, the video GPU was just cranked up to 100% all the time.

If you use Ubuntu on a notebook, and are suffering poor battery life, a good place to start is with your drivers.  Who would have guessed a video driver would make THAT great a difference?

Ubuntu: PowerFAIL

I’ve been living in a completely Ubuntu world for over a week now, and am still loving the experience overall.  However, one thing has definitely given me pause: Ubuntu clearly consumes more power on my Lenovo T400 than Windows 7 ever did.

Typical battery life for me in Windows 7 was a solid three hours.  With Ubuntu, I am getting no more than 90 minutes.  That’s about a 50% reduction in battery life – similar to the results posted a couple of months ago by Phoronix.

I love Ubuntu, even if I can’t improve the battery life, so I’ll try some tweaks to see if I can improve the results.

By comparison, PowerWIN is clearly the Dell Mini9 Hackintosh.  No spinning disk, no fan, 9″ monitor, and MacOS gives it a typical battery life of well over four hours.  I’ve seen it run for over five hours if the monitor is dimmed.

Now I need to close this post.  I have only 10% battery left and my PC is about to die.

Ubuntu: The Time Has Come

I’ve messed around with the Ubuntu operating system off and on for several years.

It’s an important concept: a Linux distro focused on simplicity, usability, and mass appeal.  Unix has been the “next big thing that never happened” since the 1970s.  Linux was supposed to be the killer implementation, and Red Hat and other companies did a good job at creating compelling server-side distros, but no Linux distribution has ever been sufficiently end-user-friendly to displace Windows and Mac on the desktop.  For years it’s been next to impossible to find drivers for the myriad hardware the desktop requires – cameras, scanners, joysticks, and so on.  And applications for Linux – while available – often lack the polish of Windows or Mac apps, and typically must be compiled for the user’s particular system… needless to say, this is not the sort of process Joe User will put up with.  Problem is, the typical Linux user thinks this process is Just Fine Thanks due to a hundred technical reasons that nobody cares about, so for years, change has come very slowly.

Ubuntu and its benefactor Canonical have been working diligently to make a Linux distro that’s truly user-friendly – something that could truly compete in the free market against Windows and Mac.  I installed Ubuntu for the first time about three years ago, and found it to be interesting – even compelling in many ways – but like always, the drivers and lack of software prevented me from living with it.  I used MS Office apps, and the replacement, OpenOffice, was too underpowered for me; drivers for much of my hardware were unavailable or inadequate.

Things have been changing, and I have now switched to Ubuntu on all of my computers except my Hackintosh and the DAW at Pleasantry Lane.  While the software selection still comes up short in places, I have found that over the past few years my dependence on MS Office has waned significantly due to two factors:

  • MS Office has failed to advance in usefulness
  • Cloud apps like Google Docs have become worthy alternatives

For photo management, nothing touches Picasa, which works better on Windows and Mac than on Linux.  But it does run on Linux (are you listening, Google?).  For music management, nothing touches MediaMonkey, but I can use Ubuntu’s default player “Rhythmbox” well enough.  Oracle offers a strong, free, VMware-compatible virtualization host called VirtualBox OSE that has helped ease my Windows separation anxiety: if something comes up that requires Windows, well, I still have Windows.

I used to do primarily Windows-based development.  But increasingly I’ve come to see the light on using VMs for development: they make it easy to keep clean, isolated development environments that can be pushed back and forth to production VMs, so running a dev environment in a virtual machine isn’t really a problem if I want to do Windows development.  And besides, I’ve been itching to do more *nixy development – Python, Ruby, MySQL, and CouchDB – which I can do natively on this machine (though I’ll probably use a VM for that as well).

Other, bundled software is pretty nice.  There is OpenOffice, which has matured significantly.  There’s Gwibber (a social-media client) and Empathy (a chat client).  Remote Desktop (for Linux boxes) and Terminal Server clients are both built-in.  I spend a lot of time in text editors, and I like the Gedit editor enough to willingly turn my back on EditPad Pro.  There’s the Evolution mail and calendar client which, like Outlook, I doubt I’ll ever need, since Gmail rocks.

Drivers seem stable, solid, and plentiful.  The only device on any of my computers that doesn’t work in Ubuntu 10.04 is the fingerprint reader on my Lenovo notebook.  Other stuff works surprisingly well.  Integrated camera?  Just works.  HP all-in-one network printer/scanner/fax?  Just works.  Things that used to get Linux boxes really confounded (like Sleep mode) work great now.  Heck, even Bluetooth works.

Other things are such a pleasure.  Boot up time into Ubuntu, once the computer has left the startup screen, is literally one second on my Lenovo (which has an SSD).  Networked computers all see each other nicely and play well together.  The UI is slick and powerful.  Fonts render better in Firefox than in any Windows or Mac browser, making web surfing more pleasurable.  The installation process is super-painless – easier than installing Windows 7 or Snow Leopard.  Ubuntu One is nifty.  The Ubuntu Software Center maintains a convenient list of easily-installable, compatible, and free apps that download and install in seconds.

It’s fast.  It doesn’t crash.  And it’s virtually virus-proof.

Did I mention it’s free?

Hackintoshed!!

So I’m writing this with my new $350 Hackintosh netbook.

I learned about Hackintoshing a few months ago, and was intrigued. I love the Mac OS, but there are things about Apple that seriously bother me. iTunes? Can’t stand it. The closed nature of the Mac platform? Not so much. You have to buy a $2600 Mac Pro just to get an expandable computer. And the prices generally. Lordy.

I tried using a Mac as my main computer for a few days and gave up. It is a lovely operating system and a MacBook Pro is a very nice laptop, but the cost – about 2X that of the (more powerful) Lenovo – and the inability to live “natively” on it (being a Windows guy in Real Life) did it in for me.

But I liked the Mac experience. OSX is a terrific operating system. It’s so clean. It’s delightfully Unixy. I owned several Macs back in the day, and it has always bothered me when I go to someone’s Mac and don’t remember how to use it.

And then there are these nifty netbooks everyone is running around with now. The form factor is intriguing. Tiny, lightweight, cheap, and powerful enough for most day-to-day tasks.

I finally saw one in real life at Stack Overflow DevDays, and was convinced. It was a Dell Mini 9 – universally recognized as the easiest, most compatible Hackintosh platform (apparently the 10v is also a very good Hackintosh). It essentially runs OSX natively, right out of the box, and supports it almost completely. Dell no longer makes the Mini 9, but you can pick up a refurb unit cheap. I got mine for $220, with free shipping. I already had a copy of Snow Leopard from my aborted attempt at Mac Ownership. I dropped a 64 GB RunCore SSD into it and set about installing OSX on it.

It was completely painless. I followed these simple instructions and in about an hour had the thing up and running. The only thing that didn’t work correctly was Sleep and Hibernate (the computer would hang when you tried to put it to sleep); installing the free SmartSleep utility fixed the Sleep problem but not Hibernate.

The main complaint – common to any computer with this tiny form factor – is the usability of the keyboard. It is cramped, and the apostrophe / quote key is in a terrible location. However it is usable – I am able to type at about 80% of the rate I achieve on my Lenovo (which may have the perfect keyboard). Productive, but not enjoyable. If you are a serious touch-typist then you will have more problems. I am sort of a four-fingered typist so I think that I am probably more adaptable to this keyboard.

I have read claims from a few people that they can type better on an iPhone than on the Mini 9’s keyboard. That is balderdash. The Mini 9 does take some getting used to, but it’s a lot faster than typing with one or two fingers. Some people have swapped the keyboard for the Euro / US version, which trades smaller keys for a better key layout. I think it comes down to one thing: if you’re writing code, or a novel, or any other large text that makes heavy use of apostrophes and / or quotes, then the Mini 9 is going to be pretty frustrating. Otherwise, you should be able to make it work for you.

On the good side, the screen is small (1024×600) but lovely. It is bright and white and sharp and very pleasant to look at. And with 2 GB of RAM and a 64 GB SSD the computer is quite fast. Totally inadequate for serious CPU work like A/V, but for 90% of what I use a computer for, it’s just great. It will play videos nicely, too – and they look terrific on the LCD. I/Os are good – ethernet, VGA, 3 USB, audio, and an SD slot. It is the perfect travel companion.

It is also silent – has no moving parts at all – and cool. The bottom warms up a little but doesn’t ever get anywhere near “hot”.

Dell sells the Mini 9 with Ubuntu. Ubuntu is a great little operating system, and is nicely configured to be netbook-friendly on the Mini 9 – but it doesn’t compare to OSX. OSX may be the perfect netbook OS. I haven’t yet installed iLife on this computer, but I can see it coming.

And finally, there’s the cool factor. You’re running the best consumer OS money can buy, on a small, quick, nifty, and very cheap piece of hardware. It’s Mac-cool without the Mac-cost.

If you want a Mac netbook, you have a choice. You can wait for Apple to make one, or you can just Hackintosh a Dell Mini 9 or 10v.

Semantic Blogging Redux

A while back I posted something about WordPress’ taxonomy model.  At the time I thought it was clever and thought we should use something like it for the DotNetNuke Blog module.  Now, I’m less enamored with it.

Here’s why.

To recap, consider the database structure WordPress uses for taxonomies:

The seeming coolness stemmed from the decision to make “terms” unique, regardless of their use, and to build various taxonomies from them using the wp_11_term_taxonomy structure.  So let’s say you have the term “point-and-shoot”, and you use it as both a tag and a category.  “Point-and-shoot” exists once in the wp_11_terms table and twice in the wp_11_term_taxonomy table – once for each structure the term participates in.  This seems useful because the system “understands” that the tag “point-and-shoot” and the category “point-and-shoot” both mean the same thing.

But is that always a safe assumption?

Consider the case of a photo blog, where the writer is posting photos and writing a little about each.  This photographer has a professional studio, and also shoots portraits in public locations, as well as impromptu shots at parties.

This photographer has set up a category structure indicating the situation in which the photo was taken – “Studio/Location/Point-and-Shoot” (where “Point-and-Shoot” means an impromptu photograph) – and another structure or set of tags indicating what sort of camera was used – “Point-and-Shoot” as opposed to “DSLR”.

Same term.  Two completely different meanings.  Use that term as a search filter and you will get two sets of results, possibly mutually exclusive.

And so – to be truly “semantic”, a term cannot exist independently of its etymology (as expressed in the category hierarchy), which is exactly what WordPress’s implementation assumes it can.

The Angel of Death

So I decided last Wednesday to finally retire my aging desktop.  It died a peaceful, natural death.

The rest were not so lucky….

For some time I have wanted to replace my aging desktop + laptop combo with a single portable notebook that could serve double duty as an easy traveller as well as a desktop replacement.  I finally found a machine that met my needs well: the Dell Studio XPS 1340.
The Studio XPS 1340 is small, weighing about 5.5 pounds, which makes it nicely portable.  And, in an affordable configuration offered at Best Buy, it sports a 2.4 GHz Core 2 Duo (1066 MHz FSB), 4 GB RAM, and a 500 GB 7200 RPM hard disk – all of which make it a reasonably strong performer.  It has a nice backlit keyboard, strong metal hinges, a tasteful design with leather accents, and other appointments that seem well thought-out.  And, at $899 from Best Buy, the package was irresistible.  This was the machine for me.

I disassembled my desktop to harvest the data drive out of it and set about getting it ready to eBay, then headed off to pick up my new computer at Best Buy.  Like all new PCs I purchase, my first step when I got it home was to delete the drive partitions and set up Windows sans bloatware.  By the time I had installed Vista (and updates), and Office 2007 (and updates), and Visual Studio 2008 (and updates), and SQL Server (and updates) and other apps, most of my day was gone.  Late that night I inserted a CD-ROM to hear an angry clicking sound from the slot-loaded drive, followed by a diminishing whirring noise.  Yep, the optical drive had failed.

Next day dawned bright and early, as I disappointedly headed back to Best Buy to return the dead machine.  This time I was buttonholed by the Apple salesman – a slick, knowledgeable gentleman named Bruce.  Bruce wasted no time talking up the MacBook Pro and bashing Dell and Microsoft.  I’ve been saying for years that my next machine might well be a Mac.  It’s no secret that Apple’s building the best hardware out there, and that OSX is the best desktop operating system yet built.  It’s also not lost on me that the virtual machines available to run Windows apps have become robust and powerful performers.  After almost an hour of brainwashing from Bruce – as well as the lure of the seductive aluminum lovelies on display – I dropped an additional $1600 and walked out of the store with a MacBook Pro and a copy of VMware Fusion.

It wasn’t 30 minutes before I was truly in love with the Mac (and Leopard).  It does so many things so well.  But 12 hours later – after installing Snow Leopard, Fusion (and updates), Vista (and updates), Office (and updates), et al – I was faced with the ugly truth that however slick and powerful the 2.8 GHz MacBook Pro might be, running Microsoft apps in VMware is still a clumsy and slow way to run a development environment. I’m sure that the MacBook Pro will outrun any Windows notebook when dual-booting Vista natively, but running Vista in a VM is definitely not as fast as running it natively on the 2.4 GHz Dell.  Sorry guys, as a Microsoft development environment, it isn’t as good.  If I could live in MacWorld and rarely use the Windows apps, it would be worth it.  It’s awesome.  But if you live in the Windows world (as I do), the MacBook ends up being a very, very expensive Windows machine.

So, next day.  Back the MacBook went.  I get working on the replacement Dell 1340.  It’s not as slick as the MacBook Pro but it’s pretty sweet.  And I have $1600 back in my pocket.

12 hours later, the thing up and died.  This time, the motherboard.

Bummer.  Another day lost.  That’s three days now that have been spent setting up (and returning) computers.

So, third time’s a charm, right?  Wrong.  That was the last Dell 1340 in all of North Texas.  Maybe all of Texas.

I’m not a deeply religious man, but sometimes I get the idea that God is sending me a clear sign.  Maybe I’m not supposed to have a new computer right this moment.  OK, I get that.

So I decide to go ahead with Plan A (getting rid of the old desktop) but figure I can drop an extra gig of RAM in the old notebook and make it last another year or so.  Plus, I got my hands on a Win7 install, and from all I can tell, Win7 outperforms Vista.

So, I fdisk that puppy and install Win7 on it.  Late that night, as the Win7 install is wrapping up, the installer throws errors.  The computer reboots.  CHKDSK is running.  CHKDSK is not happy.  Yep, the hard drive has failed.  Another one bites the dust.  Three down.  Four including the original desktop that died a natural death.

OK.  Now I’m considering a career in home and garden, or perhaps food preparation.

I shake off these thoughts and resolutely turn my mind to positive thinking.  Life hands you lemons?  Make tarte au citron, that’s what I always say.

So I decide to drop the coin for a solid state drive for the notebook.  $300 later, and the notebook is now equipped with the 128 GB SSD from Crucial.  250 MB/sec read, 200 MB/sec write.  Holy Mother of God is this thing fast.  I’ll save that for my next blog post, but I was blown away by the speed.

So Sunday I spent the day reloading Win7 (and updates), installing apps (and updates) — you know the drill.  The computer was super fast now.  I rearranged my desk to be notebook-friendly.  Life is looking good.

Late Sunday night.  No, make that Monday morning.  1:30 AM.  I finish a last set of updates.  The computer reboots.

Why is CHKDSK running?

I think the SSD may be bad…

… I think I’m going to cry now.

True Believer

Joel Spolsky is a big fan of SSDs.  Even when he’s wrong, I like reading his stuff.  But when he’s right, he’s oh-so-right.

So, if you’re keeping up, I recently installed an SSD in my laptop.  This is the summary.

SWEET JESUS MY COMPUTER IS ON METH!

Now, this is not a serious benchmark review.  I’m just calling ’em like I see ’em here.  But this old notebook is now the fastest computer I’ve ever used.

Not really.  If I had to throw a big video rendering project, or a big compilation project at it, it would feel pretty slow.  It only has a Core Duo 1.7 GHz processor and 1 GB of RAM.  The ATI Mobility 1400 display adapter doesn’t completely suck for a notebook, but it’s no award winner either.

In fact the only fast piece of hardware in the whole system is the drive.  This is one of – if not THE – fastest SSDs available: with 250 MB/sec reads and 200 MB/sec writes, this 128 GB model from Crucial is really, really damned fast.  About twice as fast as a pair of 10,000 RPM Raptors in RAID0.  In the bidness, we call that “crazy fast”.

So this three year old computer simply feels like the fastest machine I’ve ever used.  And I own a really fast machine – a quad processor box with 4 GB of RAM and a nice display adapter I built for my studio.  The laptop feels much, much snappier.

Click on an app, and it just opens.  Boom.  Open a file – no waits.  And the impact on the swapfile can’t be overstated, either.  When you’re running a lot of apps – even when you have lots of RAM – Windows will use the swapfile heavily.  With this uber-fast drive, you almost never notice swapfile activity.  It just happens too fast.

Which goes to show you – most of the time we’re waiting on our PCs, we’re really waiting on our hard disk.  Don’t believe me?  Throw an SSD in your old PC and get back to me on that.  Like me and Joel, you’ll be a true believer.

Where Facebook Will Fail

Robert Scoble has an interesting (if dated) post called “Why Facebook has never listened and why it definitely won’t start now”.  It’s a good article with many great points.

He writes:

Zuckerberg is a real leader because he doesn’t care what anyone thinks. He’s going to do what he thinks is best for his business. I wish Silicon Valley had more like him.

I get his point.  To run a business effectively, it’s not so important to know what your customer thinks as it is to anticipate what they’re going to think.  And it’s clear that Zuckerberg has a vision and is running with it.

Here’s where I disagree with that vision.

Scoble (and Zuckerberg) have this vision of Facebook as a way to target advertising and make billions. It may well be possible to make billions targeting ads to Facebook users.

There is, however, a mistaken belief that there are high barriers to exit for Facebook users.  The argument goes: when all your friends are on Facebook, you’ll never leave Facebook.

Remember MySpace?

History shows that people will leave Facebook the moment – the instant – something appreciably cooler comes along.  The early adopters will try it, the connectors will promote it, and then the great masses will rush in droves to be a part of it.

It isn’t too technically difficult to create a new social networking site.  Plenty of people are doing it.  None of them are sufficiently better than Facebook to steal the market share.  Yet.

Facebook, however, now has a very, very serious liability: all those pesky ads.  Nothing is less cool in a social networking environment than ads.  I disagree firmly with Scoble on this one.  It’s like I’m having a personal conversation with three of my close friends at a party, and this annoying guy keeps poking his head in.  “Did you say you were buying a car?  You need to test drive the new Mazda 6!  It’s Car and Driver’s Car of the Year!”  Then moments later “You went to the dentist?  Did you know that the Oral B is preferred 3 to 1 by dentists for removing plaque?”

I’m trying to have a conversation here, buddy.  It won’t be long before I punch that asshole in the face.

When Facebook is making billions on ads, someone less interested in buying airplanes, yachts and islands can come along with a new social networking site that has no ads or limited ads.  An alternative that will be instantly cooler.

Once it starts, Facebook will be at a terrible disadvantage.  Once you’re making billions, it’s very, very hard to compete against a competitor who is willing to settle for mere hundreds-of-millions.  Their knee-jerk response will be to increase the ad penetration to keep up the revenue stream.

It will be too late for them when they realize that their revenue model is exactly what is turning off their customers.

Math, No. Set Theory, Yes.

Jeff Atwood couldn’t be more right when he says

I have not found in practice that programmers need to be mathematically inclined to become great software developers. Quite the opposite, in fact. This does depend heavily on what kind of code you’re writing, but the vast bulk of code that I’ve seen consists mostly of the “balancing your checkbook” sort of math, nothing remotely like what you’d find in the average college calculus textbook, even.

Exactly.  Programming – especially GUI-based web-centric software development of the sort that most people are up to these days – is much more a “right-brained” than “left-brained” activity.

Question for the group: is logic – especially set theory – more right- or left-brained?  Modern software development may not be highly mathematical, but it often requires heavy database design and optimization, where a strong aptitude in set theory is a big plus.

Bugs? Features? Defects? Who Cares? I Do.

There was a bit of debate on the issue of bugs versus features I raised in Coding Horror-ibly.  I tried my best to keep it concise, but apparently, didn’t explain myself well.  It seems worthwhile to clarify, Mythbusters-style.

Myth 1: All defects are bugs

This Myth is FALSE.  However, the converse, all bugs are defects, is TRUE.

Defects can fall into a variety of categories, including:

  1. Defective product management
  2. Defective requirements / specification
  3. Defective coding
  4. Defective implementation

Bugs are synonymous with the third category – defective coding – either through mistranslation of requirements or syntactical or logical errors of the code itself.

As a result, there exists a semantic gap between the way users use this word (typically, synonymous with defect) and how programmers use this word (to mean a coding failure).  From this misunderstanding grave clashes often arise with devastating results.

Myth 2: Any time the software doesn’t do what the customer expects, it’s a defect.

This Myth is TRUE.

However, not everyone who purchases the software is a customer.

WTF you say?

Joe the Village Idiot wants a piece of software that will help him perform some useful business mathematics for his junk-removal business.  He wants to analyze some financial numbers arranged in columns, to understand their relative proportion to some other numbers arranged in a nearby column.  Hopefully the software will help him compute totals, averages, and ratios.  He’d also like it if the software would produce a presentation-quality pie chart.

After hasty and slipshod research, Joe purchased the software he thought would do the job best: Sybase e-Biz Impact.

Shortly after installing Sybase e-Biz Impact on his laptop, Joe the Village Idiot found himself unable to enter the numbers he collected into columns, so he called for assistance.  Following is an excerpt from the communication he had with the Sybase helpdesk:

Helpdesk:  Thank you for calling Sybase.  My name is Chandra.  May I please have your Sybase customer number or qualifying registered product serial number?

Joe TVI: (mumbling) The serial number is 443-A943F-1192.

Helpdesk:  Thank you, one moment please.  Am I speaking with Joe?

Joe TVI: Yep, I’m Joe.

Helpdesk: Thank you, Joe.  How may I be of assistance today?

Joe TVI: OK, I installed e-Biz Impact on my laptop today.  I’m reading the manual and looking through the online help, and I can’t find where to enter my sales figures.

Helpdesk: (without missing a beat) e-Biz Impact will run on a laptop, but it is a server product.  Do you have a server you can install e-Biz Impact on?

Joe TVI: No…

Helpdesk: No problem for now, but I’ll make a note in your customer file to have one of our configuration specialists give you a call.  Let’s continue.  You say you can’t find where to enter your sales figures?

Joe TVI: No…

Helpdesk: E-Biz Impact is a systems integration product for hospitals that supports the HL7 patient information integration standard.  It isn’t concerned with sales figures.  Are you trying to integrate a patient information system?

Joe TVI: I haul off junk from people’s front yards.  I don’t even have medical insurance.

Helpdesk: Hmmm…

At this point, Chandra the Sybase Helpdesk Guy is faced with a dilemma.  Does he

  1. Collect information about what Joe was trying to do with the product at the time of failure and submit an issue report to the software development team, or
  2. Politely tell Joe that perhaps e-Biz Impact isn’t the best product for his needs, and send him out to pick up a copy of Microsoft Excel?

Now, clearly, this is a very silly example.  But it is illustrative of the issue I’m trying to raise.  Not everyone who purchases your software is your customer.

The term “marketing” is used almost universally to refer to the targeting of advertising at potential customers to achieve greater sales.  However, the chief problem facing people who truly understand the art of marketing is not how to sell more product to the people one has identified as potential customers.  No, the chief problem is how to clearly identify the people who are not potential customers.

Obviously, Joe is not a potential customer for Sybase e-Biz Impact.  But what about these potential customers:

  1. the user who would like to use Gmail as an RSS aggregator
  2. the user who would like to use Cakewalk Sonar as a MIDI lighting controller
  3. the user who would like to use Excel as a database middleware tool
  4. the user who would like to use Microsoft Word to create and edit web pages

Examples 1-3 are great examples of things that people might want to do with certain tools because they seem like other tools that do similar jobs.  Lots of email clients serve double-duty as RSS aggregators.  Why not Gmail?  Sonar is an excellent MIDI recorder, why not use it to run MIDI-controlled lighting systems?  Excel can connect to databases and has a programming language, so why not turn it into a piece of middleware?

The answer to these questions has to do with business and product management strategy, the underlying technical competencies of the tools, the current customer base of the product, and other factors.  The resulting answer is that these potential uses of the product are considered undesirable by the company – in other words, Gmail isn’t in the RSS aggregation business, Sonar isn’t in the lighting control business, and Excel isn’t in the integration business.  When users who are trying to use these products for these purposes discover flaws related to their application of the tool, they aren’t finding bugs.  They’re generating noise and should probably be given their money back and steered towards a different product.

Example 4 is a great example of what happens when a company fails to say “Microsoft Word is not a web page editor”.  Microsoft at some point decided to allow users to publish their Word documents as HTML, and hilarity and torture promptly ensued.  I hope you’ve never tried this.  It’s terrifying.  And the worst part is this: all the failures of Word to properly produce good looking and well-formed HTML are now bugs, because Microsoft created the expectation that Word should be used as an HTML authoring tool.

This is all just another way of saying just because you can doesn’t mean you should.

Clearly, if we can identify someone who is our archetypical user, we can produce a piece of software that conforms to their every need.  The problem isn’t finding this person.  They abound.  The problem is knowing who isn’t this person, and deciding not to meet their needs.

This leads us to our next myth:

Myth 3: Any time the software doesn’t do what the customer wants, it’s a defect.

This Myth is FALSE.

But how does this differ from Myth 2, which was TRUE?

It’s in the words “want” versus “expect”.

If you’re truly a customer, and you have expectations about the product, then chances are good that every failure to meet expectations is some kind of defect, although maybe not a coding defect (a.k.a. bug).  Something happened that either (1) incorrectly set expectations, a.k.a. bad customer communication, (2) incorrectly identified how to meet expectations, a.k.a. bad requirements / spec definition, or (3) attempted to meet expectations a certain way and failed, a.k.a. bug.

In fact, the difference between a “feature request” and a “defect” is exactly the difference between what a customer wants and what a customer expects.  If you want it, but don’t expect it, it’s a feature.  If you expect it, but it isn’t there, it’s a defect.  If you expect it, and it was supposed to be there, but isn’t or doesn’t work, it’s a bug.

Myth 4: Customers don’t care about the distinction between bugs, defects, and feature requests

This Myth is FALSE.

To be sure, some customers don’t.  Some customers don’t care if their children get enough to eat.  But many customers do care about the difference.  The more vested the customer is in the quality of the product, the more the customer will care about the distinction.

Who, after all, are public beta testers, if not customers who care about the difference between bugs, defects, and feature requests?

Who, after all, are open source developers, if not customers who care about the difference between bugs, defects, and feature requests?

One thing I’ve learned is that there are always a set of leading edge customers – customers that get it.  These are the most important people in the world to listen to when it comes to designing a better product.  And let me tell you – these customers really, really care about the difference between bugs, defects, and feature requests.

Again – there are plenty of customers who don’t care about these distinctions.  But many do care.  And we should care.  And we should damn sure care when the customer cares.

Conclusion

If you read my blog, something should be apparent about my style of software project / product management: I’m terribly customer-focused.  And I’m terribly interested in identifying defects and understanding root causes.

As a result, I’m very likely to identify lots of things as defects that most software managers would never call defective.  Most software management is middle management, and does not want to be held responsible for anything if they can avoid it.  As Jeff pointed out in his misguided article, categorizing problems as feature requests is a way many managers and developers avoid responsibility.

But I’m not like that.  I say, call ‘em all defects unless they can be proven otherwise.

Of course I think some of this comes back to the fuzzy term “bug”.  When a developer says “bug” she usually means “broken code.”  I too subscribe to this definition – to me, a “bug” is a coding failure which includes broken code as well as bad requirements translation / implementation.  But when a customer uses the word “bug” she often means “something that doesn’t do what I expect” which isn’t necessarily a bug.

Perhaps a better solution than just dropping the whole distinction between “bugs” and “feature requests” would be to drop the term “bugs” in favor of the term “defects”.  Because while we can argue about whether a particular defect is a bug or not, we can’t argue that it’s a defect.

Not if we’re honest.

It’s a Computer, You Can’t Expect it to… Count

So I’m deleting this really big folder from my USB hard drive, and I get the message from Windows:

[screenshot: Vista’s delete-progress dialog]

which is odd, because I know there’s over 35,000 files totaling over 370 GB of data in that folder.

Ah, I see…

the count is growing.  Fortunately, only 5 seconds remain.

A few minutes later, over a minute remains, which slowly counts down to 5 seconds – and 5 seconds remains the estimate for the next 15 minutes.

I know I’ve ranted about Vista on more than one occasion.  But this is ridiculous.

I understand that some operations may seem quick, and then turn out to be slow, foiling the accuracy of a progress bar.  This is not the case, however.  This progress bar is estimating time to completion without ever attempting to compute the actual size of the operation! The rate of file deletion is extremely consistent.  If Vista had begun by counting the number of files in the total set, then an accurate estimate could have been presented.
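
To illustrate how cheap the missing step is, here’s a toy sketch of the idea in shell – count the work up front, then report progress against real numbers.  (This is obviously not how Explorer is implemented; it’s just the principle, and it assumes the folder isn’t empty.)

#!/bin/bash
# Toy sketch: size the job first, then show progress based on actual counts.
target="$1"
total=$(find "$target" -type f | wc -l)     # one quick pass to count the files
count=0
find "$target" -type f -print0 | while IFS= read -r -d '' f; do
  rm -f -- "$f"
  count=$((count + 1))
  printf '\rDeleted %d of %d files (%d%%)' "$count" "$total" $((100 * count / total))
done
echo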

Oh well.  At least we have talking paperclips.

Coding Horror-ibly

I love Jeff Atwood’s blog.  But sometimes, I think he’s smoking crack.

His latest post, “That’s Not a Bug, It’s a Feature Request” gets it way wrong.  Regarding Bugs versus Feature Requests, Jeff writes:

There’s no difference between a bug and a feature request from the user’s perspective.

I almost burst out laughing when I read that.

Jeff goes on:

Users never understand the difference anyway, and what’s worse, developers tend to use that division as a wedge against users. Nudge things you don’t want to do into that “feature request” bucket, and proceed to ignore them forever. Argue strongly and loudly enough that something reported as a “bug” clearly isn’t, and you may not have to do any work to fix it. Stop dividing the world into Bugs and Feature Requests, and both of these project pathologies go away.

I’m not sure what kind of “users” Jeff has to support, but in my universe, there is a very clear difference between “Bugs” and “Feature Requests” which users clearly understand, and which Jeff would do well to learn.

The difference is simple:

  1. Functionality that was part of the specification (i.e. something the user paid for) and which fails to work as specified is a Bug.
  2. Functionality that was not part of the specification (i.e. something the user has not paid for yet) is a Feature Request.

I confess that the vast majority of my experience comes from designing, creating, and supporting so-called “backoffice” applications for companies like VeryBigCo.  And believe me, at VeryBigCo, users know what they need to get their work done, and demand it in the application requirements.  If the application does not meet requirements during UAT, or stops performing correctly in use, then the application is defective.

But what about shrink-wrapped apps?  Do users understand the difference between a bug and a feature request?  In a recent discussion thread on the Sonar forum, a set of users were complaining that a refund was in order because the company had focused on “New Features” instead of “Fixing Bugs” from the previous version.

So we can lay to rest the question of whether users think of Bugs and Feature Requests differently.  They do.  And users expect fixing bugs to take precedence over adding new features.  The next question is: should “bugs” and “feature requests” be handled differently from the development point of view?

Let me answer that question with another question: should all items on the application’s to-do list be handled with the same priority?  Should Issue #337 (Need “Right-Click to Copy”) be treated with the same urgency as Issue #791 (“Right-Click to Copy” Does Nothing)?

Apparently, in Jeff’s world, they should.  Bug?  Feature Request?  It’s all just “stuff that we need to change,” so let’s just get right on that, shall we? According to Jeff, if we just drop the distinction, then it all goes away.

But how can it?  In the first example, the application doesn’t have a “Right-Click to Copy” feature, which is a good idea, so we should add it.  In the second example, we committed to providing a “Right-Click to Copy” feature, and the user paid for it, but it isn’t there! Does Jeff really think the user is indifferent between these two situations?  Not the users I know.

There is yet another point that Jeff misses: continuous improvement.  The fact is, if we aim to develop defect-free software, we have to start by understanding our defect rate.  The simple fact is that a Bug is a defect, and a Feature Request is not.  Ignoring the difference means ignoring the defect rate, which means you can forget about continuous improvement.

Furthermore, if you’re doing a really good job at continuous improvement, then you care about the distinction on a finer level and for an even better reason: it helps you understand and improve three distinct streams of work (see the sketch after this list):

  • defects caused by failure to transform stated requirements into product (“development defects”, a.k.a. “bugs”)
  • defects caused by failure to capture requirements that should have been caught initially (requirements and analysis defects that arrive as “feature requests” but which should be reclassified as “requirements defects”)
  • non-defect feature requests – good ideas that just happened to emerge post-deployment (the real “feature requests”)
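
Here is that sketch, a minimal illustration in Python; the issue fields are hypothetical, not taken from any real tracker:

    from dataclasses import dataclass

    @dataclass
    class Issue:
        title: str
        in_spec: bool                   # the behavior was promised in the requirements
        works_as_specified: bool        # the shipped code honors that promise
        should_have_been_in_spec: bool  # an obvious need that analysis missed

    def classify(issue: Issue) -> str:
        if issue.in_spec and not issue.works_as_specified:
            return "development defect"   # a bug: we broke something we promised
        if not issue.in_spec and issue.should_have_been_in_spec:
            return "requirements defect"  # arrives as a "feature request", but we missed it
        return "feature request"          # a genuinely new, post-deployment idea

    def defect_rate(issues):
        # The number that continuous improvement actually tracks.
        defects = [i for i in issues if classify(i) != "feature request"]
        return len(defects) / len(issues) if issues else 0.0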

The only valid point to emerge from the cloud of hazy ideas Jeff presents is that it is unconstructive to use the Bug versus Feature Request distinction as a wedge against users.  It is true that bedraggled development teams will often try to conceal the magnitude of their defectiveness by reclassifying Bugs as Feature Requests.  But this is symptomatic of a far larger problem of defective project / product management, not a problem with the classification itself.

I’m reminded of an episode of The Wire, in which the department is trying desperately to reclassify homicides as unintentional deaths to drop the city’s murder rate statistics.  Following Jeff’s reasoning, the better solution would be to drop the terms “homicide” and “manslaughter” altogether.  “Stop dividing dead people into Homicides and Involuntary Manslaughters, and both of these human pathologies go away.”

Right, Jeff?

When SEO Isn’t Enough

I was looking for a power antenna replacement for my car today, and the top site returned for my search was Installer.com.

The web site is so horrific I refuse to shop there.  I opened it and immediately closed my browser like I’d accidentally clicked a pr0n link.  It’s like walking into a store with ultra-bright lights playing loud death metal, with salesmen shouting at me – I’d walk right out.

SEO is good, but someone needs to seriously reconsider this web site.

Google Trips on Video Chat Rollout

This morning Google announced its new video chat capability with an eye-catching link at the top of the Gmail window.  Clicking the link takes you to this page, where you can see a nifty video that makes video chat look pretty cool.  But the link to “Get Started” takes you to a dead page (http://mail.google.com/videochat).

The interesting thing isn’t that the service coughed up blood on its first day out.  The interesting thing is that, ten hours later, http://mail.google.com/videochat is still 404.

It can be hard to revive an overwhelmed application, but it’s really, really easy to put up a page to let users know what’s going on.

I wonder why Google is leaving this page 404?  It doesn’t inspire confidence.

Putting the “Perma” Back in Permalink

The DotNetNuke Blog module has had a checkered history with Permalinks.  The earliest versions did not use them, so old blog entries never had a Permalink created for them.  Instead, links to entries were generated programmatically, on the fly.

It’s been trouble ever since.

Permalinks were later introduced, but the old code that generated links on the fly was allowed to remain.  In theory, this shouldn’t cause any problems so long as everyone is using the same rules to create the link.  In reality, depending on how a reader navigated to a blog entry, any number of URL formats might be used.  A particular blog entry might reside at any number of URLs.

From a reader’s point of view, there is really no issue with an entry residing at various URLs.  But from an SEO perspective, it’s a bad idea for a given piece of content to reside at more than one URL: it dilutes the linkback concentration that search engines use to determine relevance.

It’s also a troubleshooting nightmare.  Since there are so many different places in the code where URLs are being created, if a user discovers an incorrect or malformed URL, the source of the problem could be any number of places.

Finally, it’s a maintenance annoyance.  If you are publishing content using the blog, you don’t want URLs that change.  You want the confidence of knowing that when you publish a blog entry, it resides at one URL, and that URL is reasonably immutable.  The old system that generated URLs on the fly was subject to generating different URLs if there were various ways for users to navigate to the blog.

The Permalink Vision

The Blog team has a vision of where we want to take URL handling:

  1. All Blog entries should reside at one URL only (the Permalink).
  2. The Permalink URL for the entry should be “permanently” stored in the database, not generated “on the fly”.
  3. The Permalink should be SEO-friendly.
  4. Once created, the system will never “automatically” change your Permalink URLs for you.

We’ve come really close to achieving this vision in 03.05.x.

With the 03.05.00 version of the Blog module, we have undertaken an effort to ensure that the Permalink (as stored in the database) is always used for every entry URL displayed by the module.  After releasing 03.05.00 we discovered a few remnants of old code, and believe that as of the 03.05.01 maintenance release we will have ensured that all URLs pointing to entries are always using the Permalink stored in the database.

But there was a problem with changing all the URLs to use the Permalink stored in the database.  Since old versions of the Blog didn’t generate Permalinks (and some generations generated broken Permalinks) how could we safely use Permalinks from the database for all entry URLs?  The answer was to force the module to regenerate all the Permalinks on first use.  When you first use the Blog module, it will automatically regenerate all of your Permalinks for the entire portal, ensuring that the database is correctly populated with the appropriate URLs for each entry.
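
For the curious, the regeneration pass amounts to something like this minimal sketch (Python for brevity; the real module is .NET, and the table, column, and helper names here are invented):

    def regenerate_permalinks(db, build_url):
        # One-time pass: rebuild and persist the canonical URL for every entry in the portal.
        # db and build_url are stand-ins for the module's data layer and URL builder.
        for entry in db.fetch_all("SELECT entry_id, tab_id, title FROM blog_entries"):
            permalink = build_url(entry["tab_id"], entry["entry_id"], entry["title"])
            db.execute(
                "UPDATE blog_entries SET permalink = ? WHERE entry_id = ?",
                (permalink, entry["entry_id"]),
            )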

The decision to force all users to regenerate their Permalinks was a measured one.  Obviously, automatically forcing Permalink regeneration violates the fourth rule listed above, and could theoretically cause the URLs of some entries to “move around” depending on how broken their Permalinks were.  But we believed that we required a one-time fix to get all entries on the new Permalink approach, and that this approach was only likely to “move” entries that had truly broken Permalinks in the first place.

Going forward we are confident that this represents the best approach to finally resolving the Permalink issue once and for all.

SEO-Friendly URLs and Permalinks

With version 03.05.00, we introduced SEO-friendly URLs that change the ending of our URLs from “default.aspx” to “my-post-title.aspx”.  We also introduced a 301 redirect that automatically intercepts requests for entries at the old “unfriendly” URL, and redirects to the new “friendly” URL.
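
Conceptually, the redirect behaves like this sketch (Python-flavored pseudocode rather than the module’s actual ASP.NET handler; the helper names are invented):

    def handle_entry_request(request, lookup_permalink, send_301, serve_normally):
        entry_id = request.query.get("entryid")
        if entry_id:
            permalink = lookup_permalink(entry_id)       # canonical URL stored in the database
            if permalink and request.path != permalink:  # arrived via an old "unfriendly" URL
                return send_301(permalink)               # permanent redirect preserves link equity
        return serve_normally(request)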

When you install 03.05.00, it will by default still be using the old, “unfriendly” URLs.  If you want SEO-friendly URLs, you must enable them using a setting found in Module Options.

When you change the setting, only your new posts will use the new SEO-friendly URLs.  This is consistent with the Fourth Rule: you shouldn’t click an option and suddenly have all of your existing URLs changed for you.  If you want to make your old entries SEO-friendly, you must change the option, then use the “Regenerate Permalinks” option to apply the change to all entries.

A Couple of Issues

As I mentioned earlier, after the release of 03.05.00, we discovered a few areas in the code where the system was still generating URLs “on the fly” instead of using the Permalink.  So, if you’re using 03.05.00, and change the “SEO-Friendly” setting, you will discover that some of your existing URLs do, in fact, change to the new format.  This is a bug that is being corrected in 03.05.01.

There is one other way that a Permalink URL might change unexpectedly.  If you use the SEO-friendly URL setting, the module uses the post title to create the “friendly” portion of the link.  If, after you post an entry, you change its title, the URL will change.  Fortunately, links to the old URL will be caught by the 301 handler and redirected correctly.  This problem will not be corrected in version 03.05.01 but will probably remain until version 4.

Thoughts About Version 4

Version 4 of the Blog module is still on the back of a cocktail napkin.  No hard and fast decisions have been made yet about its feature set.  But I will preview where I think version 4 might go, at least as regards Permalinks and SEO-friendliness.

In version 4, I believe we will introduce the concept of a “slug” to the blog module.  A slug is simply a unique, SEO-friendly text string that is used to create a portion of a URL and is unchangeable except by the blog editor.  So, for example, given the URL http://www.mysite.com/tabid/109/entryid/302/my-post-title.aspx, the slug is “my-post-title”.

How are slugs different from what we have today?  The only difference is that today, the string “my-post-title” is generated automatically from the title, and if the title changes, the string changes.  With a slug, the string would not change automatically if the title changes, but could only be changed manually.  Slugs ensure that once an entry is posted, it stays put unless the publisher expressly decides to move it.
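
For illustration only, slug creation might look something like this (Python; the real implementation would be .NET, and the exact normalization rules are my assumption):

    import re
    import unicodedata

    def make_slug(title: str) -> str:
        # Lowercase, fold to ASCII, replace runs of non-alphanumerics with hyphens.
        ascii_title = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
        return re.sub(r"[^a-z0-9]+", "-", ascii_title.lower()).strip("-")

    # The key difference from today's behavior: the slug is computed once, at publish
    # time, and stored; editing the title later does not touch it.
    # make_slug("My Post Title!")  ->  "my-post-title"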

If we do deploy slugs, then there will have to be a few other changes.

First of all, the entire point of using slugs is that, once created, they can only be changed manually.  That means that the “Regenerate Permalinks” functions will have to be removed.  Once each entry has a slug, it can’t be “regenerated” programmatically.  The very idea of “regenerating” becomes moot.

Secondly, the point of a slug is to provide the SEO-friendly ending to each URL.  It presumes that the blog is “SEO-friendly”.  If you aren’t “SEO-friendly” there is no slug.  So for version 4, we may make “SEO-friendliness” mandatory and force it on all blog entries, old and new.

“But wait!” you cry.  “I thought that the point of Permalinks was to ensure that the system would never again change my URLs, and here you are saying that in a future version, you’re going to change all my URLs whether I like it or not!”

Well, yeah.  Guilty as charged.

First off, think of this as the very last step in achieving SEO-friendly Permalinks that are truly and finally “perma”.  Once we achieve SEO-friendly slugs, we have made it all the way to the goal.  And this is really the only way to get there, at least, the only way that is easy to support and not confusing to the end-user.

Secondly, the 301 redirection built into the module should ensure that the transition from old URL to SEO-friendly slug is completely transparent to all users and to search engines.  All the old links will work, and they will correctly report the move to search engines, which will update themselves accordingly.  Thousands of Blog module users are already testing this in version 03.05.x, and I believe that by version 4 we will be confident in this approach.

Of course, all of this is speculative, since version 4 isn’t even in the design stage yet.  But I hope that this information helps illuminate how the Blog team is thinking about the module and where it is likely to go in the future.  And, as usual, your feedback is highly encouraged.

Taxonomy and SEO

Taxonomy is one of the least understood weapons available for SEO.  We all know the basics of effective SEO:

  • URLs constructed with relevant terms, avoiding parameterization
  • Each page can be accessed by only one URL
  • Effective use of keywords in the title tag
  • Use of keywords in H1 tags
  • Links back to the page from other pages

How does taxonomy fit into all of this?

I started a webzine in 1998 called ProRec.com.  I built a custom CMS to run it, and spent a few years on SEO back before there was something called “SEO”.  In fact ProRec predates Google.  By the spring of 2000, ProRec consistently ranked in the top 10 search results on all relevant terms, usually in the top 3.  Due to many factors, some beyond my control, ProRec went dark in 2005 and was relaunched on DotNetNuke’s Blog module in 2007.  It no longer enjoys its former ranking glory, but I hope to use the lessons I learned to improve the Blog module in future versions.

One of the lessons I learned was the importance of effective use of taxonomy on SEO.  Designing and properly using effective taxonomy solves several problems:

  1. Populates META tags appropriately
  2. Encourages or enforces consistent use of similar keywords across the site
  3. Forms the basis for navigation within the site, linking related pages
  4. Forms the basis for navigation outside the site, linking to other related information

Let’s look at these one at a time.

Populating META Tags

It’s true that META tags are not as important to search engines as they once were, but they are still used, and therefore still important.  Most blogging systems will take the keywords entered as Category or Tags and use them as META tags.  If you’re using DotNetNuke’s blog module, however, you’re out of luck.  The system simply doesn’t comprehend any kind of taxonomy and doesn’t let you inject keywords into the META tags except at the site level.  Opportunity missed.
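
What I’d like to see is simple enough.  A hypothetical sketch (Python, illustration only) of building a per-entry keywords META tag from its category terms:

    def meta_keywords_tag(categories):
        # De-duplicate while preserving order, then emit one META tag for the entry.
        seen, keywords = set(), []
        for term in categories:
            if term.lower() not in seen:
                seen.add(term.lower())
                keywords.append(term)
        return '<meta name="keywords" content="{}" />'.format(", ".join(keywords))

    # meta_keywords_tag(["Lumix", "TZ3", "Compact Digital", "High ISO"])
    # -> '<meta name="keywords" content="Lumix, TZ3, Compact Digital, High ISO" />'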

When it comes to content tagging, a structured taxonomy (categories) offers benefits over ad-hoc keywords (tags).  The obvious reason is that a predefined and well-engineered taxonomy is more likely to apply the “right” words since a user manually entering tags on the fly can easily be sloppy or forget the appropriate term to apply.   The less obvious reason is that as a search engine crawls the site, it will consistently see the same words over and over again used to describe related content on your site.

Why is it important for the search engine to see the same words over and over again?  Because “spray and pray” (applying lots of different related words to a given piece of content) doesn’t cut it.  You don’t want to be the 1,922nd site on 100 different search terms.  You want to be the #1, #2, or #3 site on just a few.

So think of a search engine like a really stupid baby.  Your job is to “teach” the baby to use a few important words to describe stuff on your site.  Just like teaching a human, the more consistent you are, the more likely the search engine is to “learn” the content of your site and attach it to a small set of high-value terms.

Enforcing Keyword Usage

One of my main complaints about “tags” versus “categories” is that tags added to content on-the-fly tend to be added off the top of one’s head.  That’s fine for casual bloggers who just want to provide some simple indexing.  But if you are a content site with a lot of information about some particular subject, chances are that tagging like this can get you into trouble.  The reason is that on-the-fly tags often inadvertently split a cluster of information into several groups, because two or three (or more) terms will be used interchangeably instead of just one.

Consider a site with a well-defined and structured taxonomy.  Let’s consider a very common application: a photography site primarily covering reviews of cameras and photography how-tos.  A solid taxonomy structure would probably include four indexes:

  • Manufacturer (Canon, Nikon, Lumix, etc..)
  • Product Model (EOS, D40, TZ3, etc..)
  • Product Type (DSLR, Rangefinder, micro, etc..)
  • Topic (Product Review, Lighting, Nature, Weddings, etc..)

Generally, the product reviews would be indexed by manufacturer, product model, and product type, with the “Topic” categorized as “Product Review”.  How-tos would be indexed by their topic (“Weddings”) as well as any camera information if the article covered the use of a specific camera.  For example, an article called “How to Improve Low-Light Performance of the Lumix TZ3” might be indexed thusly:

  • Manufacturer: Lumix
  • Product Model: TZ3
  • Product Type: Compact Digital
  • Topic: High ISO

Having a system that prompts the user to appropriately classify each article ensures that the correct keywords will be applied.  Getting the manufacturer and model correct is probably pretty easy.  It’s harder to remember the correct product type (“Compact Digital” versus “Compact”).  And remembering the right topic is a real challenge (“High ISO” versus “Low Light” versus “Exposure” or any of a hundred other terms I could throw at it).  Moreover, the user must remember to apply all four keywords when the article is created.

We can see the value of focused keywords from this example.  At a site level, relevant keywords are at a high abstraction level, like “camera review”.  It’s unrealistic to think a web site could own a top search engine ranking for such a broad term.  At the time of this writing, Google shows almost 14 million web pages in the search result for “camera review”.  But a search for the new Nikon laser rangefinder “nikon forestry 550” returned only 138!  An early review on this product with the right SEO terms could easily capture that search space.

Having a system with four specific prompts and some kind of list is essential to keeping these indexes accurate.  Ideally the system provides a drop down or type-ahead list that encourages reuse of existing keywords.
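
A minimal sketch of the idea, assuming each index keeps its own controlled term list (the terms below are just the examples from this article):

    TAXONOMY = {
        "Manufacturer": ["Canon", "Nikon", "Lumix"],
        "Product Type": ["DSLR", "Rangefinder", "Compact Digital"],
        "Topic": ["Product Review", "Lighting", "High ISO", "Weddings"],
    }

    def suggest(index, prefix):
        # Return existing terms matching what the author has typed so far,
        # nudging them toward the canonical keyword instead of a near-duplicate.
        return [t for t in TAXONOMY.get(index, []) if t.lower().startswith(prefix.lower())]

    # suggest("Topic", "hi") -> ["High ISO"]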

Creating a Navigation System

Here’s where it all starts to come together.  Once you have a big pile of content all indexed using the above four indexes, the next obvious step is to create entry points into your content based on the index, and to cross-link related content by index.

On ProRec, we had five entry points into the content:

  • Main view (chronological)
  • Manufacturer index
  • Product Model index
  • Product Type index
  • Topic index

Needless to say, when a search engine finds a comprehensive listing of articles on your site, categorized by major topic, it greatly increases the relevance of those articles because the engine is able to better understand your content.  Think about it: right there under the big H1 tag that says “High ISO” is this list of six articles all of which deeply cover the ins and outs of low-light photography.  It’s a search engine gold mine.  Obviously it also helps users navigate your site and find articles of interest, too.

My favorite part of the magic, however, was using the taxonomy to create a “Related Articles” list on each article.  Say you’re reading a review of a Lumix TZ3.  We can use the taxonomy to display a list of articles about other Lumix cameras as well as other Compact Digital cameras.  On ProRec this was even more valuable, because ProRec reviewed (and published how-tos for) many different types of gear and covered a lot of different topics.  Go to a review of a Shure KSM32 microphone, and here’s this list of reviews of other mics.

The “Related Articles” list immediately creates a web interconnecting each article to a set of the most similar articles on the site.  Instantly the search engine is able to make much more sense out of the site.  And, of course, readers will be encouraged to navigate to those other pages, increasing site stickiness.
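
The mechanics are simple.  Here is an illustrative sketch that ranks other articles by how many taxonomy terms they share with the one being read; the article structure is hypothetical:

    def related_articles(current, all_articles, limit=6):
        indexes = ("Manufacturer", "Product Model", "Product Type", "Topic")

        def overlap(a, b):
            # Count shared terms across all four indexes.
            return sum(len(set(a.get(i, [])) & set(b.get(i, []))) for i in indexes)

        others = [art for art in all_articles if art is not current]
        others.sort(key=lambda art: overlap(current["index"], art["index"]), reverse=True)
        return others[:limit]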

More SEO Fun with Taxonomy

Once the system was in place I was able to extend it nicely.  For example, I created a Barnes & Noble Affiliate box that used the taxonomy to pull the most relevant book out of a list of ISBNs categorized using this same taxonomy and display it in a “Recommended Reading” box on the page.  So you’re reading an article called “Home Studio Basics” and right there on the page is “Home Studio Soundproofing for Beginners by F. Alton Everest” recommended to you.  The benefit to readers is obvious.  But there are SEO benefits, too, because search engines know “Home Studio Soundproofing for Beginners by F. Alton Everest” only shows up on pages dealing with soundproofing home studios.  Pages with that title listed on them (linked to the related page on Barnes & Noble) will rank higher than pages without it.

You can start to see how quickly a simple “tagging” interface starts to break down.  You need the ability to create multiple index dimensions (like product, product type, and topic) as well as some system to encourage or enforce consistent use of the correct terms.  Otherwise, you’re doing most of the work, but only getting part of the benefit.

Taxonomy, Blogging, and DNN

Obviously, most casual bloggers don’t want to be forced into engineering and maintaining a predefined taxonomy.  That’s why “tagging” became popular.  Casual bloggers want to be able to add content quickly and easily and anything that makes them stop and think is a serious impediment to workflow.  So you just don’t see blog platforms with well-engineered categorization schemes, and you definitely don’t see any that allow for multiple category dimensions.

In my article “Blog Module Musings” I wondered aloud about what sort of people really use DotNetNuke as a blogging platform in the traditional sense of the word “blogging”.  My guess is that most people using DNN as a personal weblog probably have some personal reason for choosing DNN instead of any of the free and easy tools readily available like WordPress or Blogger.  So my belief about DNN is that it isn’t a good platform for a “blog” per se, but it’s a great platform for content management and publishing.  My guess is that the DNN Blog module has much greater utility as a “publishing platform” than as a “personal weblog”.

As such, I think it makes sense that DNN’s publishing module should offer more taxonomy power than the typical blog.  I also think that it’s possible, using well-designed user interfaces, to make a powerful taxonomy easy to manage.  My experience with ProRec demonstrated this.  It was very easy to manage ProRec’s various indices, primarily because I had a fat client to provide a rich user interface.  With Web 2.0 technologies, we can now provide these user experiences in the browser.

Touch-A Touch-A Touch Me

Well, that didn’t take long.

HP is already rolling out its new line of multi-touch enabled PCs.  Take a look at the advertisement and see what you think.

Here’s what I foresee:  the thing is cool looking, and multi-touch is certainly popular.  So they’ll sell.  HP includes a touch-enabled application suite, which I’m guessing will suck generally compared with the applications it’s designed to replace.  Some people will use the suite, others won’t.  People who use a personal computer as a toy will like it, people who use it for work, not so much.

Here’s what they don’t show.  You have to put the thing close – in easy reach – so it won’t “sit right” for some people.  You’re always reaching for the screen, then back to the keyboard.  And really, most of the time, you’re using the mouse and keyboard.

I’m betting that the allure will fade.  But, then again, a lot of people thought that the mouse was a fad.

I’m interested in your opinions.  Check out the PC and post a comment.  Let me know what you think!

Blog Module Moving to Version 4

In a previous post I stated that the Blog module would offer an interim 3.6 release to provide users with a few more features before the team undertook the full-on rewrite to move the module to version 4.

Well, as it turns out, plans change.  The team has decided to go directly to version 4.  There will likely be a 3.5.1 release to patch up any bugs that surface after 3.5 is released, but no 3.6 “feature upgrade”.

This is really great news.  The team has grand plans for this module which are currently stymied by a few factors, including a lot of old deadwood in the code and poor developer productivity in the older VS 2003 environment.  Of course, the key reason is that DotNetNuke has officially left the .NET 1.1 environment so all new releases must be based on .NET 2.0.

New DotNetNuke MSDN-Style Help

Last night I was desperately seeking help for some DotNetNuke core classes, and I came up short.  Fortunately I was able to resolve my problem with a little help from Antonio, but I still wished I had a better help file available.

Well, today I discovered that Ernst Peter Tamminga has put together an MSDN-style help system for DotNetNuke.  Exactly what I was looking for.

If you do serious DNN development, this is a must-have.  Thanks Ernst!

Multi-Touch: Not the Future

Just read a great article about the future of Flash on the iPhone.  At its core the article is dead-on: the issue with running Flash apps on an iPhone isn’t technical, it’s business.  Apple wants to own the multi-touch UI paradigm and is fiercely guarding it.  Flash apps, written for the WIMP (Window, Icon, Menu, Pointer) UI metaphor, will break the seamlessness of the multi-touch experience on the iPhone and dilute the value proposition.  I think that’s a fair and true assessment.

About a year ago I wrote about the JazzMutant Dexter: a brilliant multi-touch mixing device for use with most popular DAW software.  On publishing it, I realized that there are a great many people who don’t understand the fact that multi-touch isn’t a technical issue, it’s a UI issue.  A lot of the comments on the Dexter review heralded the imminent arrival of multi-touch displays for the PC, at which time anyone could just “mix with their fingers” on a multi-touch screen using their current software.  The notion is absurd, unless one happens to have needle-sized fingers.

There is a notion out there in the Big World that one day, multi-touch screens are going to replace keyboards and mice.  It’s true that iPhones – and their multi-touch user interface – are compelling.  But if you think that multi-touch displays are going to replace the WIMP metaphor, you’re gravely mistaken.  They can’t.

There are many small issues that prevent the market from moving en masse to multi-touch devices across the board: too much screen real estate is lost with finger-sized controls, the economics of writing software for a UI that is only a fraction of the market never seem to make sense, etc..

Let’s assume all these hurdles can be overcome.  They can’t, but let’s assume they can.  There exists a basic ergonomic issue that trumps all other issues – one issue that, by itself, ensures that ubiquitous multi-touch devices are not going to replace the current desktop model.

Sit at a desk or table.  The ergonomically correct position for a display is in front of you, such that your eyes line up with the top of the display.  If that display is a touchscreen, where will your hands have to be all day?  Up.  No good.

Let’s assume you have a keyboard, which is – and is likely to remain – the most efficient form of data entry.  Ideally, it’s low – just above your lap.  What do you have to do with your hands if you want to manipulate an on-screen control?  Move them several feet to the display.  No good.

Perhaps you want to go whole-hog, and create a big 30”+ table-top display with an embedded keyboard, so your hands are kept with the screen and keyboard.  Where are your eyes?  Down.  In what position is your neck?  Bent forward.  No good.

The fact is, there are powerful ergonomic reasons why it is useful to separate the display and the data entry device.  The best position for your head is up.  The best place for your hands is down.  Workplace ergonomic experts know this all too well, and have the lawsuits to prove it.

Head up.  Hands down. So where do you put the multi-touch device?

Back in your pocket.

Blog 3.5.0 Set for Release

After a few months delay, the Blog team is set to release the 3.5.0 version of the DNN Blog module.

I won’t go into the details of the reasons behind the holdup.  Our team leader has done a good job of that here, if you’re interested.  Suffice to say, sometimes, there are circumstances beyond one’s control.

I am not sure at this point if there will still be a 3.6 interim, or if we’ll proceed directly to version 4.  I’m sure everyone knows my opinion!  At any rate, it’s good to be back on track.

ZuiPrezi: Yup Yup

Daniel Miller turned me on to ZuiPrezi, a nonlinear presentation environment built using Adobe Flex.

It’s really cool, but not just because it knocks PowerPoint back on its butt.  It’s cool because of what it suggests about the future of content presentation.

A lot of marketing types have abused Flash by building incomprehensible yet very nifty-looking sites with it.  Like Rapp.  They’ve improved their site, believe it or not.  Now it’s two pieces – a blog, which is nice but uninformative, and a list of a couple dozen phone numbers on a rather overdone map interface.  It’s up to you to know in advance that they are the world’s largest Internet marketing content provider, otherwise, you wouldn’t even know what business they’re in.  And forget about watching a demo reel, seeing a list of current clients, or getting any actual business related information.  It just isn’t there.  But this is an improvement – a few months ago I stumbled across their web site and couldn’t even figure out how to work it.  As in, literally couldn’t figure out what it did.

ZuiPrezi is so simple.  You use it to produce a landscape of information, then fly around that space according to predefined markers, or navigate the space ad hoc.  That’s it.  The information you place in this landscape is the same stuff you can put in a PowerPoint – text, graphics, photos, video, etc..  The only difference is that ZuiPrezi breaks out of the “pages” paradigm by placing the information onto a surface over which you fly instead of paginating.

Where ZuiPrezi gets it right – at least in the demo presentations produced by Kitchen Budapest – is through the use of visual cues like boxes, text, and images to provide a meaningful landscape when you zoom completely out.  In other words, you can see the content of the entire presentation and immediately grasp a lot of context.

This is the future of content sites.

More to come…

Blog Team Announces Interim 3.6 Release

The DNN Blog team has announced plans to release an interim 3.6 release to provide some final changes before undertaking the effort to rewrite the code for the version 4.x release.

The 3.6 feature set has not been made official, but current plans are to add support for BlogML, tagging, 301 redirects, and custom RSS URLs.

Every effort will be made to minimize scope creep, since it is a high priority to move forward with 4.x, but we felt that these critical changes needed to happen sooner than 4.x could deliver them.

Please Wait While We Fumble Around in the Registry

I really like Windows Live Writer.  But what’s up with the installer?

Why should it take so long to determine which Windows Live applications are installed?  Hey, Microsoft developers, I’ve got a new word for you: manifest.  Would it be too hard to just have a file that contains the Windows Live applications and their versions?  I’m sure it would take a lot less time to “Search” (a sketch of what I mean is at the end of this post).

Oh, yeah.  A loooong time.  Just under 15 minutes.

Seriously, folks.  If you can’t write better apps than that, I have grave concerns for the future of your company.
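
To be concrete about what I mean by a manifest, here is a toy sketch; the file name, format, and version numbers are entirely made up:

    import json

    # Example contents of windows_live_manifest.json:
    #   {"Messenger": "8.5.1302", "Writer": "1.0.0", "Mail": "12.0.1606"}

    def installed_apps(manifest_path="windows_live_manifest.json"):
        # Instant answer (name -> version), no registry spelunking required.
        with open(manifest_path, encoding="utf-8") as f:
            return json.load(f)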

More on Tagging

I stumbled across a bit of text that clarified an earlier discussion on tagging:

Hierarchical: indicates a parent-child (vertical) relationship (like cat and dog are children of mammals).

Association: indicates a “similar to” (horizontal) relationship (like mammals is similar to animals).

Bingo!  This is what people think of when they create categories and tags.  Categories are hierarchical, and tags are associative.  The problem is – they’re both right and wrong.

They’re right, because this is in fact what categories and tags provide.  But they’re wrong in the sense that all knowledge is hierarchical, because all human comprehension is based on sets, and sets are inherently hierarchical.

This proves an earlier point.  It isn’t the case that some knowledge is hierarchical and some isn’t.  It’s just that some topics are members of hierarchies that haven’t been defined yet.  “Mammals” isn’t similar to “animals”.  Mammals are animals.

“No problem,” you rejoin.  “That’s just a bad example.”  To which I reply: prove it.  Show me an example of an association that relates two topics yet isn’t part of a definable hierarchy.  By definition, you can’t, because the presence of an association automatically implies some set within which both topics belong.

Now, when it comes to pouring this into software, an obvious fact springs to mind: we can’t possibly be expected to have a perfectly complete taxonomy available for use within our publishing platform.  Instead, we need a system that is flexible enough to let us build as much or as little hierarchical structure as we need, and then to apply “associative” tags for topics that don’t fit into our structure.  Furthermore, it would be ideal if there was a way to “round up” topics that aren’t part of the structure, and fit them in ex post facto.
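
In code, the flexible structure I’m describing might look like this minimal sketch (the names and shapes are mine, not any existing system’s):

    taxonomy = {
        "Animals": {"Mammals": {"Cats": {}, "Dogs": {}}},   # build only as much hierarchy as you need
    }
    loose_tags = {"entry-42": ["nocturnal", "rescue"]}       # associative tags with no parent yet

    def unplaced_tags(taxonomy, loose_tags):
        # Tags not yet present anywhere in the hierarchy: candidates to file ex post facto.
        def all_nodes(tree):
            for name, children in tree.items():
                yield name
                yield from all_nodes(children)
        known = {n.lower() for n in all_nodes(taxonomy)}
        return sorted({t for tags in loose_tags.values() for t in tags if t.lower() not in known})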

6 More Words to Avoid on your Resume

TechRepublic has a cute article listing 19 words that you should avoid using in your resume.  If you need this sort of advice, then it’s a good read.

However, I think they missed a few:

Microwaveable: your skills in reheating leftovers are probably not going to get you the job, unless you are, in fact, applying for a job as a microwave oven operator.

Lesion: nobody wants to see your scab, and, regardless of the macho factor, your wounds will not earn you enough pity to get the job.

Spandex: I’m sure you look great in your Speedo.  Don’t bring it to the office.

Nubby: nobody even knows what this means, so why bother?

Lotus Notes: two words that guarantee you will be summarily passed over for any job, probably including Lotus Notes Developer and Lotus Notes Administrator.  It doesn’t matter if you used Lotus Notes to cure cancer, the world has determined that it sucks, and has moved on.  Let go.

Did I Mention that Vista Sucks?

I needed to do a quick screen-sharing session with a couple of folks today.

In the past, I’ve always reached for MSN Messenger.  At my last client, Messenger was the default chat client, and since it has built-in application sharing, we used it daily for all kinds of tasks from troubleshooting code to figuring out where to eat.

For some reason, however, I got stuck in an infinite loop when I tried to use Messenger to share a web page.  Selecting “Application Sharing” from the “Start an Activity” menu, I was presented with this disturbing message:

Your invitation was not sent because you need the latest version of Messenger to use the Application Sharing feature. Please go to the Windows Live Messenger update site to install the latest version.

That was a strange message, because I’m running the latest version available, 8.5.x.  But, being a sucker, I clicked the provided link, which started up the Windows Live Installer, which promptly informed me that I was, in fact, running the latest version.

I tried to get help on the Windows Live site, but there is no help at all on the “Application Sharing” feature.  In fact, the closest item in the Help Table of Contents was this useless page.

Frustrated, I turned to one of the last remaining software companies that still seems to have a clue (Google) for an answer to my problem.  Google quickly provided this thoroughly helpful link, which, as you can see, explains that Application Sharing and Whiteboard are no longer available in Vista.  Instead, we have Windows Live Meeting, which is a Vista-only application (and is much more clumsy to use than Messenger).

Yet another place where I’ve bumped my head on Vista.  It’s not that it’s different.  It’s that it doesn’t work as well.

Or, in this case, AT ALL.

Exciting New Enhancements to the Blog Module Comments Section

Identification icons are quickly becoming a popular way for bloggers to encourage responsible use of blog comments.  A variety of solutions are available, all of which aim to provide useful benefits to the blog reader.

Identification Icons

Identification icons allow blog readers to personalize their posts just like a forum avatar.  The added personalization may encourage increased commenting and discussion.  Identification icons also make it harder for users to impersonate one another, which leads to more responsible posting, and they may discourage flaming, since a user tied to an identification icon has to take additional steps to obfuscate their identity.

Version 3.4.1 of the Blog module supports all popular identification options available today:

  • Gravatar
  • Identicon
  • Wavicon
  • MonsterID

Gravatar, or Globally Recognized Avatar, provides an easy service that allows users to upload an image avatar that follows them from site to site.  This simple solution ties the image to an email address, providing a very easy way for blog software to retrieve the image.

Identicons, Wavicons, and MonsterIDs are automatically generated images that can be used in place of a Gravatar in the event that the user does not want to create one.  These images are generated by a hash of the user’s email address (or IP address, in the event that the user chooses not to enter an email address).
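
For the curious, the lookup itself is tiny.  Here is a sketch that assumes Gravatar’s usual URL scheme (an MD5 hash of the trimmed, lowercased email address, with a d= parameter selecting the generated fallback); treat the exact parameters as an assumption:

    import hashlib

    def gravatar_url(email, fallback="identicon", size=80):
        # Hash the email address and build the avatar URL; if the user has no
        # Gravatar, the service generates the requested fallback image instead.
        digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
        return f"https://www.gravatar.com/avatar/{digest}?d={fallback}&s={size}"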

One feature that may be unique to DotNetNuke is the ability for the user to instantly preview their image in the comments section before submitting their comments.  As soon as the user enters their email address and tabs out of the email field, their Gravatar (or other icon) will automatically display in a preview area.  As far as we know, this preview capability doesn’t exist in any other blogging platform.

Other Comments Changes

We’ve included a few other improvements to the comments area as well:

  • Users can now enter a website address with their comment.  This feature can be enabled or disabled by the blog owner
  • Blog owner can show or suppress unique comment titles
  • Blog owner’s comments have a unique CSS class, enabling them to stand out from other comments

My Vista Field Report

Vista sucks.

Seriously.

What really amazes me is that, apparently, tens of thousands of people tested this operating system for an extended period of time, and it still made it out the door.

I’m not going into elaborate detail about the problems, which, while numerous, fall into two categories:

  • Performance
  • Usability

Performance

Vista performance is terrible. I am running Vista on a 3 GHz Pentium D, 2 GB RAM, 1 GB ReadyBoost, 200 GB main drive, 500 GB data drive, etc. etc. etc.. Not the world’s most modern computer, but still quite powerful. XP runs on it like a champ.

Here’s my benchmark. I run a NAnt compile script repeatedly on an app that I maintain. I’ll compile this thing 20, 30 times at a sitting. I was frustrated with the process because, on Vista, sometimes it would compile in 5-8 seconds, but about 40% of the time it would take 30-60 seconds. Very slow, and far more variable.

On XP? The same process runs in 6.8 seconds, with approximately .3 seconds variability. Like clockwork.

The reason for the slowness and variability is easy to guess: the culprit is probably some number of the 89 processes or dozens of services running at any time in a minimal Vista install. The thing is horribly bloated. If even one or two of them are rude, that could be the problem.

What I do know is that my hard drive is constantly spinning. I’ve complained about this many times in support forums. The answer is always the same: the Windows Search indexer will chew on the drives for a few days to weeks while it’s building the initial search index. This is a perfectly reasonable and understandable explanation, and I’m willing to accept it since the idea of Windows Search is possibly worthwhile.

However, six months and one red-hot disk drive later, nothing’s changed. I don’t know what Vista is up to, but it’s really working hard. Way too hard to just be idling. And that ReadyBoost memory key just stays lit up.

I figured I might have a bad install, so I switched back to XP for a few months, then, stupidly, reverted back to Vista. SSDD.

I don’t have time to test, fix, troubleshoot, or benchmark. I use the computer too much. But my near-constant use provides me with outstanding subjective measures. The user experience of Vista is very, very slow.

Conclusion: Vista performance is terrible.

Usability

Aero is beautiful. My video adapter is quite powerful, so Aero renders nicely with no performance hit. Lovely. But no improvement in usability.

The rearrangements in Vista are a setback. Did the “Add/Remove Programs” control panel really need to be changed to “Programs and Features”? It was a lot easier to find near the top of the list. But the rearrangements are, overall, trivial annoyances that I would eventually overcome.

The UAC is a nightmare. Supposedly, after a while, you don’t have to deal much with the UAC. Clearly, these people don’t ever clean up the shortcuts on their desktop. Every deleted shortcut is first met with “Destination Folder Access Denied – Continue, Skip, Cancel” which, when Continued, requires a UAC authorization.

Yes, that’s right, I need Administrator privileges to delete a shortcut from the desktop.

I’m sure there’s some reason that is perfectly compelling to some propellerhead somewhere. But it’s stupid. I can delete an EXE from my desktop with no interruption at all. So why do I need Administrator permission to delete a shortcut to the same EXE?

This is just one usability example. There are others. The worst offender is not Vista itself but its user-spiteful sidekick, Microsoft Office 2007.

Let me get this straight. They decided to change the toolbar and the hotkeys in one fell swoop?

So I’m a hotkey user. Right off the bat, I’m grounded, because so many have changed in Office 2007. So I go hunting for the right button.

However, some genius decided that what I really need, what would be really great, is context-sensitive buttons, completely and utterly missing the beauty and purpose of the button bar.

How so?

If I already knew the context of the command I wanted, then I could just as easily have picked it off a drop-down menu as I could a context-sensitive button bar. The beauty of a button bar is precisely that it isn’t context sensitive. It’s immutable. The same things are always in the same place no matter what I’m doing in the application, unless I specifically modify the bar to suit me.

Which wouldn’t be so horrible if the hotkeys (which are also immutable) hadn’t changed at the exact same time. But they did, grinding my ability to edit Word docs to a complete halt.

Indenting, numbering, and other simple, critical word processing functions have, however, remained broken. No time to fix them, what with all the oh-so-important toolbar redesigning going on.

It would be laughable, if it wasn’t so damned painful.

So, it’s back to Windows XP and Office 2003 for me. At least until it’s time to upgrade.

To a Mac.

Blog Module Musings

A lot needs to happen to make the DotNetNuke Blog module truly competitive.  Part of the problem is that there are competing needs for the module:

  • Use as a “personal” weblog
  • Use as a publishing platform

Joe Blogger

Of course, the Blog module was originally meant to serve the needs of… bloggers, that is to say, people writing journal-style weblogs.  Like this one.  That’s why it’s called a BLOG module, stupid.  OK, but suffice to say, there are particular needs of a personal weblog application:

  • Easy to use, simple
  • No need for workflow tools
  • Most will be single-author
  • Great looking, easily skinned
  • All the coolest social networking yada yada
  • etc..

In other words, a personal weblog needs to compete effectively with WordPress, Blogger, TypePad, and other popular blogging tools by offering an app that works at least as well (which will be hard, considering that several of these are free, including the hosting).  To that end, some of the features that the Blog module needs to consider are:

  • Built-in skinning (perhaps a set of 5-10 built-in template skins)
  • Email-to-blog capability
  • Metaweblog support (already on its way)
  • Social networking support (already on its way)
  • Categorization & tagging

It might also be nice if there was a way to do a DNN “blog” install, in other words, a single installer that performed the basic DNN install as well as getting the basic Blog module installed and configured.  Of course, a DNN install is still not as simple as it ought to be, and until it is, there really is no point in refining the install process of the Blog module.  DNN itself is already a sufficiently high hurdle that most casual users will shy away from using it just for blogging.

Which raises an interesting question:  are DNN bloggers ever really going to be casual users?

Casual DNN Users?

After all, how many people really run DNN just to operate a personal weblog?  Doesn’t the implicit power – and complexity – of DNN in and of itself filter out almost all casual bloggers?  I mean, if I just wanted to start blogging, there’s no way I’d use DNN.  I operate this blog on DNN because I’m already running several DNN sites.  And I still question my logic in setting this up as a DNN blog instead of using WordPress or Blogger.

Seems to me that for most Blog module users, what they have is a website, part of which is a blog.  Think about this.  If they’re running DNN, it’s very likely that they’re doing “other stuff” with it other than just running a blog.

Which raises some interesting points:

  • The blog may be much more likely to be multi-user
  • The blog may be a kind of substitute for the Announcements module or FAQ, providing company information
  • The blog may be a publishing platform more than a weblog

DNN Publisher

Which brings me to the other competing need for the Blog module – the publishing platform.  If you need to manage content – by which I mean significant amounts of printed material – in DNN, then the Blog module quickly becomes your only choice, short of purchasing a publishing tool.  Nothing else in the DNN module base comes close to meeting this need, with the possible exception of the Announcements module.

As a DNN consultant, I always advise against purchasing modules if it is at all possible to conform a preexisting base module to the need.  I see the base modules as part of the open-source draw of DNN, and while they may evolve more slowly than commercial modules, they’re likely to have good quality and ultimately stand the test of time better than commercial modules.  I want to stay on open-source code as long as possible.

That’s why I chose the Blog module for ProRec.com, instead of buying a module or building one of my own from scratch.  The fact is, it meets about 65% of my need.  Really, just barely enough to limp along.  I don’t really want it to look like a weblog, with the calendar and month list being the primary navigation tool.  But I’m willing to make do, because I get so much for free.  Free is good.

The needs of people using the Blog module as a publishing platform (like me) include almost all the needs of the casual bloggers, but add a few twists:

  • Increased need for workflow
  • Different (non-traditional) navigation
  • Better multi-author / multi-department support
  • Different “main page” support

This is by no means comprehensive, but hits the high points.  With only a few improvements, the Blog module becomes “DNN Publisher” – a flexible publishing platform.  Suddenly, this tool can support lots of publishing operations, specifically, content sites (like newspapers and magazines) and multi-department corporate sites.

Push and Pull

All this flexibility will come at the price of complexity.  I believe that with some elegant design, we can minimize the complexity and maximize the flexibility, but increased complexity is probably a given.  So there will be inevitable battles between the people who want to use the module as a simple blog platform, and others who want to use it as a more powerful publishing platform.

Which takes me back to that earlier question: how many people really run DNN just to operate a personal weblog?  Doesn’t the implicit power – and complexity – of DNN in and of itself filter out almost all casual bloggers?  Is it really reasonable to expect DNN to compete with a free WordPress account for the market of people seeking to journal about their trip to Spain?

If there are significant disagreements about the direction of the Blog module, I think there will need to be some sort of informed answer to these questions.  Perhaps a survey or some kind of market research will be in order.  At any rate, I intend to push the Blog module in the direction of “DNN Publisher”, because I think that’s its unique value and a better fit with likely DNN users, and if I take a few bullets, well, they’ll be neither the first nor the worst.