Software & IT

Hackintoshed!!

So I’m writing this with my new $350 Hackintosh netbook.

I learned about Hackintoshing a few months ago, and was intrigued. I love the Mac OS, but there are things about Apple that seriously bother me. iTunes? Can’t stand it. The closed nature of the Mac platform? Not so much. You have to buy a $2600 Mac Pro just to get an expandable computer. And the prices generally. Lordy.

I tried using a Mac as my main computer for a few days and gave up. It is a lovely operating system and the MacBook Pro is a very nice laptop, but the cost – about 2X that of the (more powerful) Lenovo – and the inability to live “natively” on it (being a Windows guy in Real Life) were too much to get past.

But I liked the Mac experience. OSX is a terrific operating system. It’s so clean. It’s delightfully Unixy. I owned several Macs back in the day, and it has always bothered me when I go to someone’s Mac and can’t remember how to use it.

And then there are these nifty netbooks everyone is running around with now. The form factor is intriguing. Tiny, lightweight, cheap, and powerful enough for most day-to-day tasks.

I finally saw one in real life at Stack Overflow DevDays, and was convinced. It was a Dell Mini 9 – universally recognized as the easiest, most compatible Hackintosh platform (apparently the 10v is also a very good Hackintosh). It essentially runs OSX natively, right out of the box, and supports it almost completely. Dell no longer makes the Mini 9, but you can pick up a refurb unit cheap. I got mine for $220, with free shipping. I already had a copy of Snow Leopard from my aborted attempt at Mac Ownership. I dropped a 64 GB RunCore SSD into it and set about installing OSX on it.

It was completely painless. I followed these simple instructions and in about an hour had the thing up and running. The only things that didn’t work correctly were Sleep and Hibernate (the computer would hang when you tried to put it to sleep); installing the free SmartSleep utility fixed Sleep, though Hibernate still doesn’t work.

The main complaint – common to any computer with this tiny form factor – is the usability of the keyboard. It is cramped, and the apostrophe / quote key is in a terrible location. However, it is usable – I am able to type at about 80% of the rate I achieve on my Lenovo (which may have the perfect keyboard). Productive, but not enjoyable. If you are a serious touch-typist then you will have more problems. I am sort of a four-fingered typist, so I am probably more adaptable to this keyboard.

I have read a few people claiming that they can type better on an iPhone than on the Mini 9’s keyboard. That is balderdash. The Mini 9 does take some getting used to, but it’s a lot faster than typing with one or two fingers. Some people have swapped the keyboard for the Euro / US version, which trades smaller keys for a better key layout. I think it comes down to one thing: if you’re writing code, or a novel, or any other large text that makes heavy use of apostrophes and / or quotes, then the Mini 9 is going to be pretty frustrating. Otherwise, you should be able to make it work for you.

On the good side, the screen is small (1024×600) but lovely. It is bright and white and sharp and very pleasant to look at. And with 2 GB of RAM and a 64 GB SSD the computer is quite fast. Totally inadequate for serious CPU work like A/V, but for 90% of what I use a computer for, it’s just great. It will play videos nicely, too – and they look terrific on the LCD. I/O is good – Ethernet, VGA, three USB ports, audio, and an SD slot. It is the perfect travel companion.

It is also silent – has no moving parts at all – and cool. The bottom warms up a little but doesn’t ever get anywhere near “hot”.

Dell sells the Mini 9 with Ubuntu. Ubuntu is a great little operating system, and is nicely configured to be netbook-friendly on the Mini 9 – but it doesn’t compare to OSX. OSX may be the perfect netbook OS. I haven’t yet installed iLife on this computer, but I can see it coming.

And finally, there’s the cool factor. You’re running the best consumer OS money can buy, on a small, quick, nifty, and very cheap piece of hardware. It’s Mac-cool without the Mac-cost.

If you want a Mac netbook, you have a choice. You can wait for Apple to make one, or you can just Hackintosh a Dell Mini 9 or 10v.

Semantic Blogging Redux

A while back I posted something about WordPress’ taxonomy model.  At the time I thought it was clever and thought we should use something like it for the DotNetNuke Blog module.  Now, I’m less enamored with it.

Here’s why.

To recap, have a look at this database diagram:

The seeming coolness stemmed from the decision to make “terms” unique, regardless of their use, and to build various taxonomies from them using the wp_11_term_taxonomy structure.  So let’s say you have the term “point-and-shoot”, and you use that as both a tag and a category.  “Point-and-shoot” exists once in the wp_11_terms table and twice in the wp_11_term_taxonomy table – each entry indicating the term’s inclusion in two different structures.  This seems useful because the system “understands” that the tag “point-and-shoot” and the category “point-and-shoot” both mean the same thing.
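
To make the structure concrete, here is a minimal sketch of the idea (Python with an in-memory SQLite database; the table and column names are simplified stand-ins, not the literal WordPress schema):

    import sqlite3

    # Simplified sketch of the WordPress idea described above: one row per unique
    # term, plus one row per (term, taxonomy) pairing.
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE terms (term_id INTEGER PRIMARY KEY, name TEXT UNIQUE);
        CREATE TABLE term_taxonomy (
            term_taxonomy_id INTEGER PRIMARY KEY,
            term_id INTEGER REFERENCES terms(term_id),
            taxonomy TEXT            -- 'category', 'post_tag', etc.
        );
    """)

    # "point-and-shoot" exists exactly once as a term...
    db.execute("INSERT INTO terms (term_id, name) VALUES (1, 'point-and-shoot')")

    # ...but twice in term_taxonomy: once as a category, once as a tag.
    db.executemany(
        "INSERT INTO term_taxonomy (term_id, taxonomy) VALUES (?, ?)",
        [(1, "category"), (1, "post_tag")],
    )

    for row in db.execute("""
        SELECT t.name, tt.taxonomy
        FROM terms t JOIN term_taxonomy tt ON tt.term_id = t.term_id
    """):
        print(row)   # ('point-and-shoot', 'category') then ('point-and-shoot', 'post_tag')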

But is that always a safe assumption?

Consider the case of a photo blog, where the writer is posting photos and writing a little about each.  This photographer has a professional studio, and also shoots portraits in public locations, as well as impromptu shots at parties.

This photographer has set up a category structure indicating the situation in which the photo was taken – “Studio/Location/Point-and-Shoot” (meaning an impromptu photograph) – and another structure or set of tags indicating what sort of camera was used: “Point-and-Shoot”, as opposed to “DSLR”.

Same term.  Two completely different meanings.  Use that term as a search filter and you will get two sets of results, possibly mutually exclusive.

And so, to truly be “semantic”, a term cannot exist independently of the context that gives it meaning (as expressed in the category hierarchy) – which is precisely what the WordPress model assumes it can do.

The Angel of Death

So I decided last Wednesday to finally retire my aging desktop.  It died a peaceful, natural death.

The rest were not so lucky….

For some time I have wanted to replace my aging desktop + laptop combo with a single portable notebook that could serve double duty as an easy traveller as well as a desktop replacement.  I finally found a machine that met my needs well: the Dell Studio XPS 1340.

The Studio XPS 1340 is small, weighing about 5.5 pounds, which makes it nicely portable.  And, in an affordable configuration offered at Best Buy, it sports a 2.4 GHz Core 2 Duo (1066 MHz FSB), 4 GB RAM, and a 500 GB 7200 RPM hard disk – all of which make it a reasonably strong performer.  It has a nice backlit keyboard, strong metal hinges, a tasteful design with leather accents, and other appointments that seem well thought-out.  And, at $899 from Best Buy, the package was irresistible.  This was the machine for me.

I disassembled my desktop to harvest the data drive out of it and set about getting it ready to eBay, then headed off to pick up my new computer at Best Buy.  As with all new PCs I purchase, my first step when I got it home was to delete the drive partitions and set up Windows sans bloatware.  By the time I had installed Vista (and updates), and Office 2007 (and updates), and Visual Studio 2008 (and updates), and SQL Server (and updates) and other apps, most of my day was gone.  Late that night I inserted a CD-ROM to hear an angry clicking sound from the slot-loaded drive, followed by a diminishing whirring noise.  Yep, the optical drive had failed.

The next day dawned bright and early as I disappointedly headed back to Best Buy to return the dead machine.  This time I was buttonholed by the Apple salesman – a slick, knowledgeable gentleman named Bruce.  Bruce wasted no time talking up the MacBook Pro and bashing Dell and Microsoft.  I’ve been saying for years that my next machine might well be a Mac.  It’s no secret that Apple’s building the best hardware out there, and that OSX is the best desktop operating system yet built.  It’s also not lost on me that the virtual machines available to run Windows apps have become robust and powerful, and are strong performers.  After almost an hour of brainwashing from Bruce – as well as the lure of the seductive aluminum lovelies on display – I dropped an additional $1600 and walked out of the store with a MacBook Pro and a copy of VMware Fusion.

It wasn’t 30 minutes before I was truly in love with the Mac (and Leopard).  It does so many things so well.  But 12 hours later – after installing Snow Leopard, Fusion (and updates), Vista (and updates), Office (and updates), et al – I was faced with the ugly truth that however slick and powerful the 2.8 GHz MacBook Pro might be, running Microsoft apps in VMware is still a clumsy and slow way to run a development environment. I’m sure that the MacBook Pro will outrun any Windows notebook when dual-booting Vista natively, but running Vista in a VM is definitely not as fast as running it natively on the 2.4 GHz Dell.  Sorry guys, as a Microsoft development environment, it isn’t as good.  If I could live in MacWorld and rarely use the Windows apps, it would be worth it.  It’s awesome.  But if you live in the Windows world (as I do), the MacBook ends up being a very, very expensive Windows machine.

So, next day.  Back the MacBook went.  I get working on the replacement Dell 1340.  It’s not as slick as the MacBook Pro but it’s pretty sweet.  And I have $1600 back in my pocket.

12 hours later, the thing up and died.  This time, the motherboard.

Bummer.  Another day lost.  That’s three days now that have been spent setting up (and returning) computers.

So, third time’s a charm, right?  Wrong.  That was the last Dell 1340 in all of North Texas.  Maybe all of Texas.

I’m not a deeply religious man, but sometimes I get the idea that God is sending me a clear sign.  Maybe I’m not supposed to have a new computer right this moment.  OK, I get that.

So I decide to go ahead with Plan A (getting rid of the old desktop) but figure I can drop an extra gig of RAM in the old notebook and make it last another year or so.  Plus, I got my hands on a Win7 install and from all I can tell, Win7 outperforms Vista.

So, I fdisk that puppy and install Win7 on it.  Late that night, as the Win7 install is wrapping up, the installer throws errors.  The computer reboots.  CHKDSK is running.  CHKDSK is not happy.  Yep, the hard drive has failed.  Another one bites the dust.  Three down.  Four including the original desktop that died a natural death.

OK.  Now I’m considering a career in home and garden, or perhaps food preparation.

I shake off these thoughts and resolutely turn my mind to positive thinking.  Life hands you lemons?  Make tarte au citron, that’s what I always say.

So I decide to drop the coin for a solid state drive for the notebook.  $300 later, and the notebook is now equipped with the 128 GB SSD from Crucial.  250 MB/sec read, 200 MB/sec write.  Holy Mother of God is this thing fast.  I’ll save that for my next blog post, but I was blown away by the speed.

So Sunday I spent the day reloading Win7 (and updates), installing apps (and updates) — you know the drill.  The computer was super fast now.  I rearranged my desk to be notebook-friendly.  Life is looking good.

Late Sunday night.  No, make that Monday morning.  1:30 AM.  I finish a last set of updates.  The computer reboots.

Why is CHKDSK running?

I think the SSD may be bad…

… I think I’m going to cry now.

True Believer

Joel Spolsky is a big fan of SSDs.  Even when he’s wrong, I like reading his stuff.  But when he’s right, he’s oh-so-right.

So, if you’re keeping up, I recently installed an SSD in my laptop.  This is the summary.

SWEET JESUS MY COMPUTER IS ON METH!

Now, this is not a serious benchmark review.  I’m just calling ’em like I see ’em here.  But this old notebook is now the fastest computer I’ve ever used.

Not really.  If I had to throw a big video rendering project, or a big compilation project at it, it would feel pretty slow.  It only has a Core Duo 1.7 GHz processor and 1 GB of RAM.  The ATI Mobility 1400 display adapter doesn’t completely suck for a notebook, but it’s no award winner either.

In fact the only fast piece of hardware in the whole system is the drive.  This is one of – if not THE – fastest SSDs available: with 250 MB/sec reads and 200 MB/sec writes, this 128 GB model from Crucial is really, really damned fast.  About twice as fast as a pair of 10,000 RPM Raptors in RAID0.  In the bidness, we call that “crazy fast”.

So this three year old computer simply feels like the fastest machine I’ve ever used.  And I own a really fast machine – a quad processor box with 4 GB of RAM and a nice display adapter I built for my studio.  The laptop feels much, much snappier.

Click on an app, and it just opens.  Boom.  Open a file – no waits.  And the impact on the swapfile shouldn’t be underestimated, either.  When you’re running a lot of apps – even when you have lots of RAM – Windows will use the swapfile heavily.  With this uber-fast drive, you almost never notice swapfile activity.  It just happens too fast.

Which goes to show you – most of the time we’re waiting on our PCs, we’re really waiting on our hard disk.  Don’t believe me?  Throw an SSD in your old PC and get back to me on that.  Like me and Joel, you’ll be a true believer.

Where Facebook Will Fail

Robert Scoble has an interesting (if dated) post called “Why Facebook has never listened and why it definitely won’t start now”.  It’s a good article with many great points.

He writes:

Zuckerberg is a real leader because he doesn’t care what anyone thinks. He’s going to do what he thinks is best for his business. I wish Silicon Valley had more like him.

I get his point.  To run a business effectively, it’s not so important to know what your customer thinks as it is to anticipate what they’re going to think.  And it’s clear that Zuckerberg has a vision and is running with it.

Here’s where I disagree with that vision.

Scoble (and Zuckerberg) have this vision of Facebook as a way to target advertising and make billions. It may well be possible to make billions targeting ads to Facebook users.

There is, however, a mistaken belief that there are high barriers to exit for Facebook users.  The argument goes: when all your friends are on Facebook, you’ll never leave Facebook.

Remember MySpace?

History shows that people will leave Facebook the moment – the instant – something appreciably cooler comes along.  The early adopters will try it, the connectors will promote it, and then the great masses will rush in droves to be a part of it.

It isn’t too technically difficult to create a new social networking site.  Plenty of people are doing it.  None of them are sufficiently better than Facebook to steal the market share.  Yet.

Facebook, however, now has a very, very serious liability: all those pesky ads.  Nothing is less cool in a social networking environment than ads.  I disagree firmly with Scoble on this one.  It’s like I’m having a personal conversation with three of my close friends at a party, and this annoying guy keeps poking his head in.  “Did you say you were buying a car?  You need to test drive the new Mazda 6!  It’s Car and Driver’s Car of the Year!”  Then moments later “You went to the dentist?  Did you know that the Oral B is preferred 3 to 1 by dentists for removing plaque?”

I’m trying to have a conversation here, buddy.  It won’t be long before I punch that asshole in the face.

When Facebook is making billions on ads, someone less interested in buying airplanes, yachts and islands can come along with a new social networking site that has no ads or limited ads.  An alternative that will be instantly cooler.

Once it starts, Facebook will be at a terrible disadvantage.  Once you’re making billions, it’s very, very hard to compete against a competitor who is willing to settle for mere hundreds-of-millions.  Their knee-jerk response will be to increase the ad penetration to keep up the revenue stream.

It will be too late for them when they realize that their revenue model is exactly what is turning off their customers.

Math, No. Set Theory, Yes.

Jeff Atwood couldn’t be more right when he says

I have not found in practice that programmers need to be mathematically inclined to become great software developers. Quite the opposite, in fact. This does depend heavily on what kind of code you’re writing, but the vast bulk of code that I’ve seen consists mostly of the “balancing your checkbook” sort of math, nothing remotely like what you’d find in the average college calculus textbook, even.

Exactly.  Programming – especially GUI-based web-centric software development of the sort that most people are up to these days – is much more a “right-brained” than “left-brained” activity.

Question for the group: is logic – especially set theory – more right- or left-brained?  Modern software development may not be highly mathematical, but it often requires heavy database design and optimization, where a strong aptitude in set theory is a big plus.

Bugs? Features? Defects? Who Cares? I Do.

There was a bit of debate on the issue of bugs versus features I raised in Coding Horror-ibly.  I tried my best to keep it concise, but apparently I didn’t explain myself well.  It seems worthwhile to clarify, Mythbusters-style.

Myth 1: All defects are bugs

This Myth is FALSE.  However, the converse, all bugs are defects, is TRUE.

Defects can fall into a variety of categories, including:

  1. Defective product management
  2. Defective requirements / specification
  3. Defective coding
  4. Defective implementation

Bugs are synonymous with the third category – defective coding – whether through mistranslation of requirements or through syntactical or logical errors in the code itself.

As a result, there is a semantic gap between the way users use the word (typically as a synonym for defect) and the way programmers use it (to mean a coding failure).  From this misunderstanding, grave clashes often arise, with devastating results.

Myth 2: Any time the software doesn’t do what the customer expects, it’s a defect.

This Myth is TRUE.

However, not everyone who purchases the software is a customer.

WTF you say?

Joe the Village Idiot wants a piece of software that will help him perform some useful business mathematics for his junk-removal business.  He wants to analyze some financial numbers arranged in columns, to understand their relative proportion to some other numbers arranged in a nearby column.  Hopefully the software will help him compute totals, averages, and ratios.  He’d also like it if the software would produce a presentation-quality pie chart.

After hasty and slipshod research, Joe purchased the software he thought would do the job best: Sybase e-Biz Impact.

Shortly after installing Sybase e-Biz Impact on his laptop, Joe the Village Idiot found himself unable to enter the numbers he collected into columns, so he called for assistance.  Following is an excerpt from the communication he had with the Sybase helpdesk:

Helpdesk:  Thank you for calling Sybase.  My name is Chandra.  May I please have your Sybase customer number or qualifying registered product serial number?

Joe TVI: (mumbling) The serial number is 443-A943F-1192.

Helpdesk:  Thank you, one moment please.  Am I speaking with Joe?

Joe TVI: Yep, I’m Joe.

Helpdesk: Thank you, Joe.  How may I be of assistance today?

Joe TVI: OK, I installed e-Biz Impact on my laptop today.  I’m reading the manual and looking through the online help, and I can’t find where to enter my sales figures.

Helpdesk: (without missing a beat) e-Biz Impact will run on a laptop, but it is a server product.  Do you have a server you can install e-Biz Impact on?

Joe TVI: No…

Helpdesk: No problem for now, but I’ll make a note in your customer file to have one of our configuration specialists give you a call.  Let’s continue.  You say you can’t find where to enter your sales figures?

Joe TVI: No…

Helpdesk: E-Biz Impact is a systems integration product for hospitals that supports the HL7 patient information integration standard.  It isn’t concerned with sales figures.  Are you trying to integrate a patient information system?

Joe TVI: I haul off junk from people’s front yards.  I don’t even have medical insurance.

Helpdesk: Hmmm…

At this point, Chandra the Sybase Helpdesk Guy is faced with a dilemma.  Does he

  1. Collect information about what Joe was trying to do with the product at the time of failure and submit an issue report to the software development team, or
  2. Politely tell Joe that perhaps e-Biz Impact isn’t the best product for his needs, and send him out to pick up a copy of Microsoft Excel?

Now, clearly, this is a very silly example.  But it is illustrative of the issue I’m trying to raise.  Not everyone who purchases your software is your customer.

The term “marketing” is used almost universally to refer to the targeting of advertising at potential customers to achieve greater sales.  However, the chief problem facing people who understand the true art of marketing is not how to sell more product to the people one has identified as potential customers.  No, the chief problem is how to clearly identify the people who are not potential customers.

Obviously, Joe is not a potential customer for Sybase e-Biz Impact.  But what about these potential customers:

  1. the user who would like to use Gmail as an RSS aggregator
  2. the user who would like to use Cakewalk Sonar as a MIDI lighting controller
  3. the user who would like to use Excel as a database middleware tool
  4. the user who would like to use Microsoft Word to create and edit web pages

Examples 1-3 are things that people might plausibly want to do with certain tools, because those tools resemble other tools that do similar jobs.  Lots of email clients serve double duty as RSS aggregators – why not Gmail?  Sonar is an excellent MIDI recorder – why not use it to run MIDI-controlled lighting systems?  Excel can connect to databases and has a programming language, so why not turn it into a piece of middleware?

The answer to these questions has to do with business and product management strategy, the underlying technical competencies of the tools, the current customer base of the product, and other factors.  The resulting answer is that these potential uses of the product are considered undesirable by the company – in other words, Gmail isn’t in the RSS aggregation business, Sonar isn’t in the lighting control business, and Excel isn’t in the integration business.  When users who are trying to use these products for these purposes discover flaws related to their application of the tool, they aren’t finding bugs.  They’re generating noise and should probably be given their money back and steered towards a different product.

Example 4 is a great example of what happens when a company fails to say “Microsoft Word is not a web page editor”.  Microsoft at some point decided to allow users to publish their Word documents as HTML, and hilarity and torture promptly ensued.  I hope you’ve never tried this.  It’s terrifying.  And the worst part is this: all the failures of Word to properly produce good looking and well-formed HTML are now bugs, because Microsoft created the expectation that Word should be used as an HTML authoring tool.

This is all just another way of saying just because you can doesn’t mean you should.

Clearly, if we can identify someone who is our archetypical user, we can produce a piece of software that conforms to their every need.  The problem isn’t finding this person.  They abound.  The problem is knowing who isn’t this person, and deciding not to meet their needs.

This leads us to our next myth:

Myth 3: Any time the software doesn’t do what the customer wants, it’s a defect.

This Myth is FALSE.

But how does this differ from Myth 2, which was TRUE?

It’s in the words “want” versus “expect”.

If you’re truly a customer, and you have expectations about the product, then chances are good that every failure to meet expectations is some kind of defect, although maybe not a coding defect (a.k.a. bug).  Something happened that either (1) incorrectly set expectations, a.k.a. bad customer communication, (2) incorrectly identified how to meet expectations, a.k.a. bad requirements / spec definition, or (3) attempted to meet expectations a certain way and failed, a.k.a. bug.

In fact, the difference between a “feature request” and a “defect” is precisely the difference between what a customer wants and what a customer expects.  If you want it, but don’t expect it, it’s a feature request.  If you expect it, but it isn’t there, it’s a defect.  If you expect it, and it was supposed to be there, but isn’t there or doesn’t work, it’s a bug.

Myth 4: Customers don’t care about the distinction between bugs, defects, and feature requests

This Myth is FALSE.

To be sure, some customers don’t.  Some customers don’t care if their children get enough to eat.  But many customers do care about the difference.  The more vested the customer is in the quality of the product, the more the customer will care about the distinction.

Who, after all, are public beta testers, if not customers who care about the difference between bugs, defects, and feature requests?

Who, after all, are open source developers, if not customers who care about the difference between bugs, defects, and feature requests?

One thing I’ve learned is that there is always a set of leading-edge customers – customers who get it.  These are the most important people in the world to listen to when it comes to designing a better product.  And let me tell you – these customers really, really care about the difference between bugs, defects, and feature requests.

Again – there are plenty of customers who don’t care about these distinctions.  But many do care.  And we should care.  And we should damn sure care when the customer cares.

Conclusion

If you read my blog, something should be apparent about my style of software project / product management: I’m terribly customer-focused.  And I’m terribly interested in identifying defects and understanding root causes.

As a result, I’m very likely to identify lots of things as defects that most software managers would never call defective.  Most software management is middle management, and does not want to be held responsible for anything if they can avoid it.  As Jeff pointed out in his misguided article, categorizing problems as feature requests is a way many managers and developers avoid responsibility.

But I’m not like that.  I say, call ‘em all defects unless they can be proven otherwise.

Of course I think some of this comes back to the fuzzy term “bug”.  When a developer says “bug” she usually means “broken code.”  I too subscribe to this definition – to me, a “bug” is a coding failure which includes broken code as well as bad requirements translation / implementation.  But when a customer uses the word “bug” she often means “something that doesn’t do what I expect” which isn’t necessarily a bug.

Perhaps a better solution than just dropping the whole distinction between “bugs” and “feature requests” would be to drop the term “bugs” in favor of the term “defects”.  Because while we can argue about whether a particular defect is a bug or not, we can’t argue about whether it’s a defect.

Not if we’re honest.

It’s a Computer, You Can’t Expect it to… Count

So I’m deleting this really big folder from my USB hard drive, and I get the message from Windows:

[screenshot: Windows’ file-delete progress dialog]

which is odd, because I know there are over 35,000 files totaling over 370 GB of data in that folder.

Ah, I see…

the count is growing.  Fortunately, only 5 seconds remain.

A few minutes later, over a minute remains, which slowly counts down to 5 seconds – and that stays the estimate for the next 15 minutes.

I know I’ve ranted about Vista on more than one occasion.  But this is ridiculous.

I understand that some operations may seem quick, and then turn out to be slow, foiling the accuracy of a progress bar.  This is not the case, however.  This progress bar is estimating time to completion without ever attempting to compute the actual size of the operation! The rate of file deletion is extremely consistent.  If Vista had begun by counting the number of files in the total set, then an accurate estimate could have been presented.
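
For contrast, here is a rough sketch of the obvious alternative: walk the folder once to count the files, then estimate from the measured deletion rate.  This is purely illustrative – not how Explorer actually works – and it ignores subdirectory removal, permissions, and error handling:

    import os, time

    def delete_tree_with_progress(root):
        # First pass: count the files so the estimate has a real denominator.
        paths = [os.path.join(dirpath, name)
                 for dirpath, _, filenames in os.walk(root)
                 for name in filenames]
        total = len(paths)

        start = time.time()
        for done, path in enumerate(paths, start=1):
            os.remove(path)
            elapsed = time.time() - start
            rate = done / elapsed if elapsed > 0 else 0.0
            remaining = (total - done) / rate if rate else float("inf")
            print(f"\rDeleted {done:,}/{total:,} files, about {remaining:,.0f}s remaining",
                  end="", flush=True)
        print()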

Oh well.  At least we have talking paperclips.

Coding Horror-ibly

I love Jeff Atwood’s blog.  But sometimes, I think he’s smoking crack.

His latest post, “That’s Not a Bug, It’s a Feature Request” gets it way wrong.  Regarding Bugs versus Feature Requests, Jeff writes:

There’s no difference between a bug and a feature request from the user’s perspective.

I almost burst out laughing when I read that.

Jeff goes on:

Users never understand the difference anyway, and what’s worse, developers tend to use that division as a wedge against users. Nudge things you don’t want to do into that “feature request” bucket, and proceed to ignore them forever. Argue strongly and loudly enough that something reported as a “bug” clearly isn’t, and you may not have to do any work to fix it. Stop dividing the world into Bugs and Feature Requests, and both of these project pathologies go away.

I’m not sure what kind of “users” Jeff has to support, but in my universe, there is a very clear difference between “Bugs” and “Feature Requests” which users clearly understand, and which Jeff would do well to learn.

The difference is simple:

  1. Functionality that was part of the specification (i.e. something the user paid for) and which fails to work as specified is a Bug.
  2. Functionality that was not part of the specification (i.e. something the user has not paid for yet) is a Feature Request.

I confess that the vast majority of my experience comes from designing, creating, and supporting so-called “backoffice” applications for companies like VeryBigCo.  And believe me, at VeryBigCo, users know what they need to get their work done, and demand it in the application requirements.  If the application does not meet requirements during UAT, or stops performing correctly in use, then the application is defective.

But what about shrink-wrapped apps?  Do users understand the difference between a bug and a feature request?  In a recent discussion thread on the Sonar forum, a set of users were complaining that a refund is in order because the company focused on “New Features” instead of “Fixing Bugs” from the previous version.

So we can lay to rest the question of whether users think of Bugs and Feature Requests differently.  They do.  And users expect fixing bugs to take precedence over adding new features.  The next question is: should “bugs” and “feature requests” be handled differently from the development point of view?

Let me answer that question with another question: should all items on the application’s to-do list be handled with the same priority?  Should Issue #337 (Need “Right-Click to Copy”) be treated with the same urgency as Issue #791 (“Right-Click to Copy” Does Nothing)?

Apparently, in Jeff’s world, they should.  Bug?  Feature Request?  It’s all just “stuff that we need to change” so let’s just get right on that shall we? According to Jeff, if we just drop the distinction, then it all goes away.

But how can it?  In the first example, the application doesn’t have a “Right-Click to Copy” feature, which is a good idea, so we should add it.  In the second example, we committed to providing a “Right-Click to Copy” feature, and the user paid for it, but it isn’t there! Does Jeff really think the user is ambivalent to these two situations?  Not the users I know.

There is yet another point that Jeff misses: continuous improvement.  The fact is, if we aim to develop defect-free software, we have to start by understanding our defect rate.  The simple fact is that a Bug is a defect, and a Feature Request is not.  Ignoring the difference means ignoring the defect rate, which means you can forget about continuous improvement.

Furthermore, if you’re doing a really good job at continuous improvement, then you care about the distinction on a finer level and for an even better reason: it helps you understand and improve

  • defects caused by failure to transform stated requirements into product (“development defects”, a.k.a. “bugs”)
  • defects caused by failure to capture requirements that should have been caught initially (requirements and analysis defects that arrive as “feature requests” but which should be reclassified as “requirements defects”)
  • non-defect feature requests – good ideas that just happened to emerge post-deployment (the real “feature requests”)

The only valid point to emerge from the cloud of hazy ideas Jeff presents is that it is unconstructive to use the Bug versus Feature Request distinction as a wedge against users.  It is true that bedraggled development teams will often try to conceal the magnitude of their defectiveness by reclassifying Bugs as Feature Requests.  But this is symptomatic of a far larger problem of defective project / product management, not overclassification.

I’m reminded of an episode of The Wire, in which the department is trying desperately to reclassify homicides as unintentional deaths to drop the city’s murder rate statistics.  Following Jeff’s reasoning, the better solution would be to drop the terms “homicide” and “manslaughter” altogether.  “Stop dividing dead people into Homicides and Involuntary Manslaughters, and both of these human pathologies go away.”

Right, Jeff?

When SEO Isn’t Enough

I was looking for a power antenna replacement for my car today, and the top site returned for my search was Installer.com.

The web site is so horrific I refuse to shop there.  I opened it and immediately closed my browser like I’d accidentally clicked a pr0n link.  It’s like walking into a store with ultra-bright lights playing loud death metal, with salesmen shouting at me – I’d walk right out.

SEO is good, but someone needs to seriously reconsider this web site.

Google Trips on Video Chat Rollout

This morning Google announced its new video chat capability with an eye-catching link at the top of the Gmail window.  Clicking the link takes you to this page, where you can see a nifty video that makes video chat look pretty cool.  But the link to “Get Started” takes you to a dead page (http://mail.google.com/videochat).

The interesting thing isn’t that the service coughed up blood on its first day out.  The interesting thing is that, ten hours later, http://mail.google.com/videochat is still 404.

It can be hard to revive an overwhelmed application, but it’s really, really easy to put up a page to let users know what’s going on.

I wonder why Google is leaving this page 404.  It doesn’t inspire confidence.

Putting the “Perma” Back in Permalink

The DotNetNuke Blog module has had a checkered history with Permalinks.  The earliest versions did not use them, so old blog entries never had a Permalink created for them.  Instead, links to entries were generated programmatically, on the fly.

It’s been trouble ever since.

Permalinks were later introduced, but the old code that generated links on the fly was allowed to remain.  In theory, this shouldn’t cause any problems so long as everyone is using the same rules to create the link.  In reality, depending on how a reader navigated to a blog entry, any number of URL formats might be used.  A particular blog entry might reside at any number of URLs.

From a reader’s point of view, there is really no issue with an entry residing at various URLs.  But from an SEO perspective, it’s a bad idea for a given piece of content to reside at more than one URL: it dilutes the linkback concentration that search engines use to determine relevance.

It’s also a troubleshooting nightmare.  Since there are so many different places in the code where URLs are being created, if a user discovers an incorrect or malformed URL, the source of the problem could be any number of places.

Finally, it’s a maintenance annoyance.  If you are publishing content using the blog, you don’t want URLs that change.  You want the confidence of knowing that when you publish a blog entry, it resides at one URL, and that URL is reasonably immutable.  The old system that generated URLs on the fly was subject to generating different URLs if there were various ways for users to navigate to the blog.

The Permalink Vision

The Blog team has a vision of where we want to take URL handling:

  1. All Blog entries should reside at one URL only (the Permalink).
  2. The Permalink URL for the entry should be “permanently” stored in the database, not generated “on the fly”.
  3. The Permalink should be SEO-friendly.
  4. Once a Permalink has been created, the system will never “automatically” change it for you.

We’ve come really close to achieving this vision in 03.05.x.

With the 03.05.00 version of the Blog module, we have undertaken an effort to ensure that the Permalink (as stored in the database) is always used for every entry URL displayed by the module.  After releasing 03.05.00 we discovered a few remnants of old code, and believe that as of the 03.05.01 maintenance release we will have ensured that all URLs pointing to entries are always using the Permalink stored in the database.

But there was a problem with changing all the URLs to use the Permalink stored in the database.  Since old versions of the Blog didn’t generate Permalinks (and some versions generated broken Permalinks), how could we safely use Permalinks from the database for all entry URLs?  The answer was to force the module to regenerate all the Permalinks on first use.  When you first use the Blog module, it will automatically regenerate all of your Permalinks for the entire portal, ensuring that the database is correctly populated with the appropriate URLs for each entry.

The decision to force all users to regenerate their Permalinks was a measured one.  Obviously, automatically forcing Permalink regeneration violates the fourth rule listed above, and could theoretically cause URLs for some entries to “move around”, depending on how broken their Permalinks were.  But we believed a one-time fix was required to get all entries onto the new Permalink approach, and that it was only likely to “move” entries whose Permalinks were truly broken in the first place.

Going forward we are confident that this represents the best approach to finally resolving the Permalink issue once and for all.

SEO-Friendly URLs and Permalinks

With version 03.05.00, we introduced SEO-friendly URLs that change the ending of our URLs from “default.aspx” to “my-post-title.aspx”.  We also introduced a 301 redirect that automatically intercepts requests for entries at the old “unfriendly” URL, and redirects to the new “friendly” URL.
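
The idea is simple enough to sketch.  This is illustrative Python only – the real module is ASP.NET, and the URL formats below are invented to resemble the examples in this post:

    # Hypothetical sketch: every entry has exactly one stored Permalink; any request
    # that arrives at a different (old-style) URL gets a 301 pointing at it.
    def resolve_entry_request(requested_path, entry):
        permalink = entry["permalink"]          # the single URL stored in the database
        if requested_path != permalink:
            # Permanent redirect: browsers follow it, and search engines transfer
            # the old URL's ranking to the new "friendly" one.
            return 301, {"Location": permalink}
        return 200, {"body": entry["content"]}

    entry = {
        "permalink": "/tabid/109/entryid/302/my-post-title.aspx",
        "content": "<p>Post body goes here</p>",
    }

    print(resolve_entry_request("/Default.aspx?TabId=109&EntryID=302", entry))   # (301, ...)
    print(resolve_entry_request("/tabid/109/entryid/302/my-post-title.aspx", entry))  # (200, ...)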

When you install 03.05.00, it will by default still be using the old, “unfriendly” URLs.  If you want SEO-friendly URLs, you must enable them using a setting found in Module Options.

When you change the setting, only your new posts will use the new SEO-friendly URLs.  This is consistent with the Fourth Rule: you shouldn’t click an option and suddenly have all of your existing URLs changed for you.  If you want to make your old entries SEO-friendly, you must change the option, then use the “Regenerate Permalinks” option to apply the change to all entries.

A Couple of Issues

As I mentioned earlier, after the release of 03.05.00, we discovered a few areas in the code where the system was still generating URLs “on the fly” instead of using the Permalink.  So, if you’re using 03.05.00, and change the “SEO-Friendly” setting, you will discover that some of your existing URLs do, in fact, change to the new format.  This is a bug that is being corrected in 03.05.01.

There is one other way that a Permalink URL might change unexpectedly.  If you use the SEO-friendly URL setting, the module uses the post title to create the “friendly” portion of the link.  If, after you post an entry, you change its title, the URL will change.  Fortunately, links to the old URL will be caught by the 301 handler and redirected correctly.  This problem will not be corrected in version 03.05.01 but will probably remain until version 4.

Thoughts About Version 4

Version 4 of the Blog module is still on the back of a cocktail napkin.  No hard and fast decisions have been made yet about its feature set.  But I will preview where I think version 4 might go, at least as regards Permalinks and SEO-friendliness.

In version 4, I believe we will introduce the concept of a “slug” to the blog module.  A slug is simply a unique, SEO-friendly text string that is used to create a portion of a URL and is unchangeable except by the blog editor.  So, for example, given the URL http://www.mysite.com/tabid/109/entryid/302/my-post-title.aspx, the slug is “my-post-title”.

How are slugs different from what we have today?  The only difference is that today, the string “my-post-title” is generated automatically from the title, and if the title changes, the string changes.  With a slug, the string would not change automatically if the title changes, but could only be changed manually.  Slugs ensure that once an entry is posted, it stays put unless the publisher expressly decides to move it.
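
Here is a hypothetical sketch of the difference; the make_slug helper and the field names are mine, not the module’s:

    import re

    def make_slug(title):
        """Derive an SEO-friendly slug from a title: lowercase, dashes, nothing else."""
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
        return slug.strip("-")

    # Today: the friendly URL fragment is recomputed from the title, so renaming
    # the post silently moves it.
    # With slugs: the fragment is generated once, saved, and only changed by hand.
    entry = {"title": "My Post Title"}
    entry["slug"] = make_slug(entry["title"])     # generated when first published

    entry["title"] = "My Post Title (Revised)"    # a later edit to the title...
    print(entry["slug"])                          # ...but the slug stays "my-post-title"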

If we do deploy slugs, then there will have to be a few other changes.

First of all, the entire point of using slugs is that, once created, they can only be changed manually.  That means that the “Regenerate Permalinks” functions will have to be removed.  Once each entry has a slug, it can’t be “regenerated” programmatically.  The very idea of “regenerating” becomes moot.

Secondly, the point of a slug is to provide the SEO-friendly ending to each URL.  It presumes that the blog is “SEO-friendly”.  If you aren’t “SEO-friendly” there is no slug.  So for version 4, we may make “SEO-friendliness” mandatory and force it on all blog entries, old and new.

“But wait!” you cry.  “I thought that the point of Permalinks was to ensure that the system would never again change my URLs, and here you are saying that in a future version, you’re going to change all my URLs whether I like it or not!”

Well, yeah.  Guilty as charged.

First off, think of this as the very last step in achieving SEO-friendly Permalinks that are truly and finally “perma”.  Once we achieve SEO-friendly slugs, we have made it all the way to the goal.  And this is really the only way to get there, at least, the only way that is easy to support and not confusing to the end-user.

Secondly, the 301 redirection built into the module should ensure that the transition from old URL to SEO-friendly slug is completely transparent to all users and to search engines.  All the old links will work, and they will correctly report the move to search engines, which will update themselves accordingly.  Thousands of Blog module users are already testing this in version 03.05.x, and I believe that by version 4 we will be confident in this approach.

Of course, all of this is speculative, since version 4 isn’t even in the design stage yet.  But I hope that this information helps illuminate how the Blog team is thinking about the module and where it is likely to go in the future.  And, as usual, your feedback is highly encouraged.

Taxonomy and SEO

Taxonomy is one of the least understood weapons available for SEO.  We all know the basics of effective SEO:

  • URLs constructed with relevant terms, avoiding parameterization
  • Each page can be accessed by only one URL
  • Effective use of keywords in the title tag
  • Use of keywords in H1 tags
  • Links back to the page from other pages

How does taxonomy fit into all of this?

I started a webzine in 1998 called ProRec.com.  I built a custom CMS to run it, and spent a few years on SEO back before there was something called “SEO”.  In fact ProRec predates Google.  By the spring of 2000, ProRec consistently ranked in the top 10 search results on all relevant terms, usually in the top 3.  Due to many factors, some beyond my control, ProRec went dark in 2005 and was relaunched on DotNetNuke’s Blog module in 2007.  It no longer enjoys its former ranking glory, but I hope to use the lessons I learned to improve the Blog module in future versions.

One of the lessons I learned was the importance of effective use of taxonomy on SEO.  Designing and properly using effective taxonomy solves several problems:

  1. Populates META tags appropriately
  2. Encourages or enforces consistent use of similar keywords across the site
  3. Forms the basis for navigation within the site, linking related pages
  4. Forms the basis for navigation outside the site, linking to other related information

Let’s look at these one at a time.

Populating META Tags

It’s true that META tags are not as important to search engines as they once were, but they are still used, and therefore still important.  Most blogging systems will take the keywords entered as Category or Tags and use them as META tags.  If you’re using DotNetNuke’s blog module, however, you’re out of luck.  The system simply doesn’t comprehend any kind of taxonomy and doesn’t let you inject keywords into the META tags except at the site level.  Opportunity missed.
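
What “populating META tags” amounts to is simple – roughly this (a hypothetical sketch, not actual DotNetNuke or WordPress code):

    def meta_keywords_tag(categories, tags=()):
        # Build a per-entry keywords META tag from the entry's own taxonomy terms,
        # rather than relying on one site-wide keyword list.
        keywords = list(dict.fromkeys(list(categories) + list(tags)))  # dedupe, keep order
        return '<meta name="keywords" content="{}">'.format(", ".join(keywords))

    print(meta_keywords_tag(["Lumix", "TZ3", "Compact Digital"], ["High ISO"]))
    # <meta name="keywords" content="Lumix, TZ3, Compact Digital, High ISO">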

When it comes to content tagging, a structured taxonomy (categories) offers benefits over ad-hoc keywords (tags).  The obvious reason is that a predefined and well-engineered taxonomy is more likely to apply the “right” words since a user manually entering tags on the fly can easily be sloppy or forget the appropriate term to apply.   The less obvious reason is that as a search engine crawls the site, it will consistently see the same words over and over again used to describe related content on your site.

Why is it important for the search engine to see the same words over and over again?  Because “spray and pray” (applying lots of different related words to a given piece of content) doesn’t cut it.  You don’t want to be the 1,922nd site on 100 different search terms.  You want to be the #1, #2, or #3 site on just a few.

So think of a search engine like a really stupid baby.  Your job is to “teach” the baby to use a few important words to describe stuff on your site.  Just like teaching a human, the more consistent you are, the more likely the search engine is to “learn” the content of your site and attach it to a small set of high-value terms.

Enforcing Keyword Usage

One of my main complaints about “tags” versus “categories” is that tags added to content on-the-fly tend to be added off the top of one’s head.  That’s fine for casual bloggers who just want to provide some simple indexing.  But if you are a content site with a lot of information about some particular subject, chances are that tagging like this can get you into trouble.  The reason is that on-the-fly tags often inadvertently split a cluster of information into several groups, because two or three (or more) terms get used interchangeably instead of just one.

Consider a site with a well-defined and structured taxonomy.  Let’s consider a very common application: a photography site primarily covering reviews of cameras and photography how-tos.  A solid taxonomy structure would probably include four indexes:

  • Manufacturer (Canon, Nikon, Lumix, etc.)
  • Product Model (EOS, D40, TZ3, etc.)
  • Product Type (DSLR, Rangefinder, micro, etc.)
  • Topic (Product Review, Lighting, Nature, Weddings, etc.)

Generally, the product reviews would be indexed by manufacturer, product model, and product type, with the “Topic” categorized as “Product Review”.  How-tos would be indexed by their topic (“Weddings”) as well as any camera information if the article covered the use of a specific camera.  For example, an article called “How to Improve Low-Light Performance of the Lumix TZ3” might be indexed thusly:

  • Manufacturer: Lumix
  • Product Model: TZ3
  • Product Type: Compact Digital
  • Topic: High ISO

Having a system that prompts the user to appropriately classify each article ensures that the correct keywords will be applied.  Getting the manufacturer and model correct is probably pretty easy.  It’s harder to remember the correct product type (“Compact Digital” versus “Compact”).  And remembering the right topic is a real challenge (“High ISO” versus “Low Light” versus “Exposure” or any of a hundred other terms I could throw at it).  Moreover, the user must remember to apply all four keywords when the article is created.

We can see the value of focused keywords from this example.  At a site level, relevant keywords are at a high abstraction level, like “camera review”.  It’s unrealistic to think a web site could own a top search engine ranking for such a broad term.  At the time of this writing, Google shows almost 14 million web pages in the search result for “camera review”.  But a search for the new Nikon laser rangefinder “nikon forestry 550” returned only 138 results!  An early review of this product with the right SEO terms could easily capture that search space.

Having a system with four specific prompts and some kind of list is essential to keeping these indexes accurate.  Ideally the system provides a drop down or type-ahead list that encourages reuse of existing keywords.
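
In code, “some kind of list” is just a controlled vocabulary per index dimension, plus a prefix lookup to drive the type-ahead.  A rough sketch, with invented terms:

    # One controlled vocabulary per index dimension; only "Topic" is shown here.
    TOPICS = {"Product Review", "Lighting", "Nature", "Weddings", "High ISO"}

    def suggest_topics(prefix):
        """Type-ahead helper: surface existing terms so authors reuse them."""
        prefix = prefix.lower()
        return sorted(t for t in TOPICS if t.lower().startswith(prefix))

    def validate_topic(term):
        """Reject free-form entries that would quietly fragment the index."""
        if term not in TOPICS:
            raise ValueError(f"Unknown topic {term!r}; choose one of {sorted(TOPICS)}")
        return term

    print(suggest_topics("hi"))            # ['High ISO']
    try:
        validate_topic("Low Light")        # not in the vocabulary...
    except ValueError as err:
        print(err)                         # ...so the author has to make a deliberate choice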

Creating a Navigation System

Here’s where it all starts to come together.  Once you have a big pile of content all indexed using the above four indexes, the next obvious step is to create entry points into your content based on the index, and to cross-link related content by index.

On ProRec, we had five entry points into the content:

  • Main view (chronological)
  • Manufacturer index
  • Product Model index
  • Product Type index
  • Topic index

Needless to say, when a search engine finds a comprehensive listing of articles on your site, categorized by major topic, it greatly increases the relevance of those articles because the engine is able to better understand your content.  Think about it: right there under the big H1 tag that says “High ISO” is this list of six articles all of which deeply cover the ins and outs of low-light photography.  It’s a search engine gold mine.  Obviously it also helps users navigate your site and find articles of interest, too.

My favorite part of the magic, however, was using the taxonomy to create a “Related Articles” list on each article.  Say you’re reading a review of a Lumix TZ3.  We can use the taxonomy to display a list of articles about other Lumix cameras as well as other Compact Digital cameras.  On ProRec this was even more valuable, because ProRec reviews (and writes how-tos about) many different types of gear and covers a lot of different topics.  Go to a review of a Shure KSM32 microphone, and right there is a list of reviews of other mics.

The “Related Articles” list immediately creates a web interconnecting each article to a set of the most similar articles on the site.  Instantly the search engine is able to make much more sense out of the site.  And, of course, readers will be encouraged to navigate to those other pages, increasing site stickiness.
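
The “Related Articles” logic itself is nothing exotic: score every other article by how many index terms it shares with the current one, and take the top few.  A rough sketch, with a data layout invented for illustration:

    def related_articles(current, articles, max_items=5):
        def terms(article):
            return set(article["index"].items())   # {(dimension, value), ...}

        current_terms = terms(current)
        scored = [(len(current_terms & terms(a)), a)
                  for a in articles if a is not current]
        scored = [(score, a) for score, a in scored if score > 0]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [a["title"] for _, a in scored[:max_items]]

    tz3_review = {"title": "Lumix TZ3 Review",
                  "index": {"Manufacturer": "Lumix",
                            "Product Type": "Compact Digital",
                            "Topic": "Product Review"}}
    articles = [
        tz3_review,
        {"title": "Lumix FZ50 Review",
         "index": {"Manufacturer": "Lumix", "Product Type": "Compact Digital",
                   "Topic": "Product Review"}},
        {"title": "Nikon D40 Review",
         "index": {"Manufacturer": "Nikon", "Product Type": "DSLR",
                   "Topic": "Product Review"}},
        {"title": "How to Shoot Weddings", "index": {"Topic": "Weddings"}},
    ]
    print(related_articles(tz3_review, articles))
    # ['Lumix FZ50 Review', 'Nikon D40 Review']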

More SEO Fun with Taxonomy

Once the system was in place I was able to extend it nicely.  For example, I created a Barnes & Noble Affiliate box that used the taxonomy to pull the most relevant book out of a list of ISBNs categorized using this same taxonomy and display it in a “Recommended Reading” box on the page.  So you’re reading an article called “Home Studio Basics” and right there on the page is “Home Studio Soundproofing for Beginners by F. Alton Everest” recommended to you.  The benefit to readers is obvious.  But there are SEO benefits, too, because search engines know “Home Studio Soundproofing for Beginners by F. Alton Everest” only shows up on pages dealing with soundproofing home studios.  Pages with that title listed on them (linked to the related page on Barnes & Noble) will rank higher than those that don’t.

You can start to see how quickly a simple “tagging” interface starts to break down.  You need the ability to create multiple index dimensions (like product, product type, and topic) as well as some system to encourage or enforce consistent use of the correct terms.  Otherwise, you’re doing most of the work, but only getting part of the benefit.

Taxonomy, Blogging, and DNN

Obviously, most casual bloggers don’t want to be forced into engineering and maintaining a predefined taxonomy.  That’s why “tagging” became popular.  Casual bloggers want to be able to add content quickly and easily and anything that makes them stop and think is a serious impediment to workflow.  So you just don’t see blog platforms with well-engineered categorization schemes, and you definitely don’t see any that allow for multiple category dimensions.

In my article “Blog Module Musings” I wondered aloud what sort of people really use DotNetNuke as a blogging platform in the traditional sense of the word “blogging”.  My guess is that most people using DNN as a personal weblog probably have some personal reason for choosing DNN instead of any of the free and easy tools readily available like WordPress or Blogger.  So my belief is that DNN isn’t a good platform for a “blog” per se, but it is a great platform for content management and publishing.  My guess is that the DNN Blog module has much greater utility as a “publishing platform” than as a “personal weblog”.

As such, I think it makes sense that DNN’s publishing module should offer more taxonomy power than the typical blog.  I also think that it’s possible, using well-designed user interfaces, to make a powerful taxonomy easy to manage.  My experience with ProRec demonstrated this.  It was very easy to manage ProRec’s various indices, primarily because I had a fat client to provide a rich user interface.  With Web 2.0 technologies, we can now provide these user experiences in the browser.

Touch-A Touch-A Touch Me

Well, that didn’t take long.

HP is already rolling out its new line of multi-touch enabled PCs.  Take a look at the advertisement and see what you think.

Here’s what I foresee:  the thing is cool looking, and multi-touch is certainly popular.  So they’ll sell.  HP includes a touch-enabled application suite, which I’m guessing will suck generally compared with the applications it’s designed to replace.  Some people will use the suite, others won’t.  People who use a personal computer as a toy will like it, people who use it for work, not so much.

Here’s what they don’t show.  You have to put the thing close – in easy reach – so it won’t “sit right” for some people.  You’re always reaching for the screen, then back to the keyboard.  And really, most of the time, you’re using the mouse and keyboard.

I’m betting that the allure will fade.  But, then again, a lot of people thought that the mouse was a fad.

I’m interested in your opinions.  Check out the PC and post a comment.  Let me know what you think!

Blog Module Moving to Version 4

In a previous post I stated that the Blog module would offer an interim 3.6 release to provide users with a few more features before the team undertook the full-on rewrite to move the module to version 4.

Well, as it turns out, plans change.  The team has decided to go directly to version 4.  There will likely be a 3.5.1 release to patch up any bugs that surface after 3.5 is released, but no 3.6 “feature upgrade”.

This is really great news.  The team has grand plans for this module which are currently stymied by a few factors, including a lot of old deadwood in the code and poor developer productivity in the older VS 2003 environment.  Of course, the key reason is that DotNetNuke has officially left the .NET 1.1 environment so all new releases must be based on .NET 2.0.

New DotNetNuke MSDN-Style Help

Last night I was desperately seeking help for some DotNetNuke core classes, and I came up short.  Fortunately I was able to resolve my problem with a little help from Antonio, but I still wished I had a better help file available.

Well, today I discovered that Ernst Peter Tamminga has put together an MSDN-style help system for DotNetNuke.  Exactly what I was looking for.

If you do serious DNN development, this is a must-have.  Thanks Ernst!

Multi-Touch: Not the Future

Just read a great article about the future of Flash on the iPhone.  At its core the article is dead-on: the issue with running Flash apps on an iPhone isn’t technical, it’s business.  Apple wants to own the multi-touch UI paradigm and is fiercely guarding it.  Flash apps, written for the WIMP (Window, Icon, Menu, Pointer) UI metaphor, will break the seamlessness of the multi-touch experience on the iPhone and dilute the value proposition.  I think that’s a fair and true assessment.

About a year ago I wrote about the JazzMutant Dexter: a brilliant multi-touch mixing device for use with most popular DAW software.  On publishing it, I realized that there are a great many people who don’t understand the fact that multi-touch isn’t a technical issue, it’s a UI issue.  A lot of the comments on the Dexter review heralded the imminent arrival of multi-touch displays for the PC, at which time anyone could just “mix with their fingers” on a multi-touch screen using their current software.  The notion is absurd, unless one happens to have needle-sized fingers.

There is a notion out there in the Big World that one day, multi-touch screens are going to replace keyboards and mice.  It’s true that iPhones – and their multi-touch user interface – are compelling.  But if you think that multi-touch displays are going to replace the WIMP metaphor, you’re gravely mistaken.  They can’t.

There are many small issues that prevent the market from moving en masse to multi-touch devices across the board: too much screen real estate is lost with finger-sized controls, the economics of writing software for a UI that is only a fraction of the market never seem to make sense, etc.

Let’s assume all these hurdles can be overcome.  They can’t, but let’s assume they can.  There exists a basic ergonomic issue that trumps all other issues – one issue that, by itself, ensures that ubiquitous multi-touch devices are not going to replace the current desktop model.

Sit at a desk or table.  The ergonomically correct position for a display is in front of you, such that your eyes line up with the top of the display.  If that display is a touchscreen, where will your hands have to be all day?  Up.  No good.

Let’s assume you have a keyboard, which is – and is likely to remain – the most efficient form of data entry.  Ideally, it’s low – just above your lap.  What do you have to do with your hands if you want to manipulate an on-screen control?  Move them several feet to the display.  No good.

Perhaps you want to go whole-hog, and create a big 30”+ table-top display with an embedded keyboard, so your hands are kept with the screen and keyboard.  Where are your eyes?  Down.  In what position is your neck?  Bent forward.  No good.

The fact is, there are powerful ergonomic reasons why it is useful to separate the display and the data entry device.  The best position for your head is up.  The best place for your hands is down.  Workplace ergonomic experts know this all too well, and have the lawsuits to prove it.

Head up.  Hands down. So where do you put the multi-touch device?

Back in your pocket.