Putting out the back catalog. Here’s Track 1 from Rhythm/Pleasure 2. Free on Soundcloud.
I create content: I write, I shoot photos, and I make music. I also produce the occasional video.
I want an online location where I can keep up with all my content, and my interaction with others.
My website – the self-hosted WordPress blog you’re reading right now – is the only place that truly gives me the control I want over my content. With my blog, I can:
- Create text posts with any length or formatting I like
- Upload photos at any resolution with my choice of viewers
- Upload music for download or insert Soundcloud or Bandcamp widgets
- Interact with my guests using comments or Disqus
- Integrate 3rd party content from other sites that offer feeds
- Maintain 100% creative control over the look, feel, format, and style
The problem – and it’s a biggie – is that the now-dinosaur-like “blog” format is completely isolated from social media. If I post something here on the blog, a few dozen people will see it. Nobody really reads my blog. But if I post something there, on Google+, a few hundred or even a thousand people might see it. It might even go viral, and millions of people might see it. On my blog, there is a next-to-zero chance that any content will go viral.
Of course, I can do what Guy Kawasaki does: publish on my blog, and link back to my blog from social media. But by not bringing the content itself into the social media stream, I’m losing a lot of potential readers.
Or I can do what guys like Robert Scoble do: post everything everywhere. Scoble is ubiquitous. I don’t know how he can keep up with it all. In the memorable words of Mick Jagger, “I just don’t have that much jam.”
Alternatively, I can migrate to the available social tools instead. I can post my text diatribes over on Google+, but I have no control over the formatting and the layout is terrible for anything longer than a few paragraphs. I can also post my photos there and that works, mmm, OK, at best. I can’t post music, but I can share videos (a terrible situation) if I upload them to YouTube first. I can interact, which is probably the best feature. But I have zero creative control over the look and feel of my content. And I can’t integrate with 3rd party tools like Instagram, Twitter, Tripadvisor, or Hipster where I also create content.
So I end up with the most important parts of my content – my long blog posts and my music – hosted outside Google+.
What I really want – what someone needs to figure out – is how to have my cake and eat it too. Allow me to have my content on my blog – give me full creative control over it – but also allow me to interact on my blog through social media.
Alternatively, allow me to do everything I can do with my blog on a social media platform: customize it, post anything on it, and integrate anything into it.
The closest thing out there, actually, is Tumblr. Tumblr offers a social platform that is rich in content and customization and strong in supporting “viral multimedia.” The two problems Tumblr has are:
- Almost zero support for interaction – the only real interaction on Tumblr is sharing others’ posts, and
- Almost zero support for long text, since 99% of the content on Tumblr is visual. It just doesn’t work well for long posts, like this one.
Let’s figure this problem out together! I know I’m not alone. What are you doing to combat this problem?
I think it is critical to spread the word of where Occupy Wall Street came from, because as it gains momentum, we are seeing many political groups trying to bend it to their wills.
Occupy Wall Street began as a single-issue protest. It started when Adbusters posted a message suggesting a protest whose central demand is that President Obama “ordain a Presidential Commission tasked with ending the influence money has over our representatives in Washington.” This is a broad-based demand that should (and did) unite people on all sides of the political spectrum, from ultra-Liberals to Tea Partiers. In fact, as many point out, the target of the rage should be Washington as much as Wall Street.
Now, we are seeing lists of “demands” from a variety of parties who claim to speak for the few thousand people participating in the protests. I am very skeptical of anybody who claims to speak for this group. The lists of demands – several have been floated, all quite different – range from fairly specific legislative proposals to more whacko rantings of ultra-leftists.
And the groups which have stepped in to participate all have their own unique agendas. Labor unions, for example, are supporting the cause – which is ironic, since labor unions definitely are part of “the influence that money has over our representatives in Washington.” What’s next, support from Exxon?
What really got me suspicious was when I found out that MoveOn.org was supporting the cause. In 1998 I joined Wes Boyd and MoveOn.org because it claimed to be an issues advocacy group focused on ending the impeachment of Bill Clinton. I was no big fan of Clinton, but I was furious about the impeachment and its resultant waste and misdirected politics. But instead of being a single-issue group focused on “moving on” from the impeachment, MoveOn.org was instead a PAC raising money for the Democrats.
Lo and behold, I had apparently signed up as a card-carrying member of the left wing of the Democratic party. That was hardly my intent. I just wanted the Republicans to get back to the business of the Contract with America and off the stupid and wasteful impeachment proceedings. I had been co-opted by a so-called “issue group” into a PAC for the Democratic party. Likewise, I suspect a lot of people occupying Wall Street are probably rather surprised at the demands that “they” are now supposedly advocating.
That’s what happens when a movement gains steam – people get out in front of it and try to use it for their own purposes.
So ask yourself: how can labor unions and MoveOn.org support “ending the influence money has over our representatives in Washington” when they _are_ “the influence money has over our representatives in Washington”?
One of two things has happened / is happening. Either
- The entire Occupy Wall Street protest was intentionally organized by Adbusters to tap into the general anger and then co-opt the group into a hard-left movement, or
- Seeing the success of the protest, a bunch of hard-left activists are trying to co-opt the original goal of “ending the influence money has over our representatives in Washington.”
In his new book, “The Lights in the Tunnel,” Martin Ford postulates an interesting (if not novel) thought experiment: what if the Luddites were right?
I have to start by confessing: I have yet to read the book. I have only read this review of the book. And looking at my schedule, I may not have time to read the book. So my comments are not directed at the book, but at the synopsis presented by the reviewer.
The premise (according to the review) is that “the Luddite Fallacy will only remain a fallacy so long as human capability exceeds technological capability.” Once that tipping point is reached, the book argues, people will be unable to find work, and without jobs or purchasing power, the economic system will collapse.
On the surface, it makes sense. Only large corporations will be able to invest sufficient resources to fully automate hospitals with robot doctors, produce food entirely without human intervention, or run governments with robot bureaucrats. Over time, the means of production will be controlled by a small number of people who will aggregate wealth, but with no jobs, there will be nobody to purchase products.
Here’s where this thesis falls apart: in a world where all work can be best performed by machines, the cost of a product is, essentially, the cost of the energy used to power the robots that provide the service.
If energy continues to be increasingly scarce, then the cost to automate becomes high relative to the cost of human labor. For example, people will always be cheaper than machines if oil is the only way we produce electricity and costs $500 a barrel. So the economics remain much the same as they are now – people will be used where they are cheaper, and robots will be used where they are cheaper – and the economy will move along.
So the premise – that the Luddite tipping-point is reached once machines achieve the technical capability of humans – is incorrect. For the tipping point to be reached, two things have to hold true:
1. Machines’ technical capability must exceed human capability (Ford’s premise), and
2. The cost to power the machines must be low relative to the cost to “power” a human.
But if condition 2 holds true, then Ford’s thesis falls apart again. Here’s how:
Let’s postulate a world in which energy is so abundant it’s practically free – perhaps robots that run on internal micro nuclear reactors, or robotically-built, multimillion-acre solar and wind farms. In this world, the cost to produce something with robots is, basically, free. After all, the cost of the raw materials in your car is negligible. It’s the cost of transforming iron and sand and oil into steel, glass, and rubber that costs money. And the cost to transform something (raw iron into steel) is equal to the labor cost (the workers) plus the energy cost (the energy used to fire the smelters). In an all-automated world, the “labor” cost equals the energy cost. If energy is practically free, then products will be practically free, too.
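The arithmetic here can be reduced to a toy cost model – a sketch for illustration only, with made-up numbers:

```python
def product_cost(materials, human_labor, energy):
    """Total cost of one unit: raw materials plus the cost of
    transforming them (human labor plus energy)."""
    return materials + human_labor + energy

def automated_cost(materials, energy):
    """In a fully automated world, human labor drops out and the
    'labor' component is just more energy (powering the robots)."""
    return product_cost(materials, 0.0, energy)

# As the cost of energy approaches zero, the cost of the product
# approaches the (negligible) cost of the raw materials.
print(automated_cost(materials=1.0, energy=100.0))  # expensive energy: 101.0
print(automated_cost(materials=1.0, energy=0.0))    # free energy: 1.0
```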
Viewed in this way, it’s much more a Utopian fantasy than a Luddite nightmare.
If you follow me, you know that I am quite enamored with Amazon’s EC2. Scalable, reliable, powerful, and cheap – it’s a revolution in computing.
The smallest and least expensive EC2 instance is the Micro instance. It’s perfect for a light-duty web server: it has little memory and CPU capacity, but it can burst to two processors, giving it responsiveness when you need it. And Bitnami has the perfect partner for your Micro instance: a WordPress stack customized to live in the cramped space of the Micro instance.
What you get in the package is nice: a complete LAMP stack running on a simplified Ubuntu 10.04 server with WordPress preconfigured and ready to go. Bitnami conveniently puts the entire stack in a single directory – you can zip that directory and drop it on another server and with very little effort you’re up and running again.
There’s plenty of info on the Bitnami site, so if you’re interested in setting it up, head over and check it out.
Where I was left a bit in the dark was… backups.
My first instinct was to use an S3 rsync tool to sync the Bitnami stack to S3. There’s S3rsync, but that costs money, and I’m seriously committed to spending the smallest amount of money possible on my web server. So I passed and settled on S3cmd instead.
Using S3cmd, I was able to write a simple script that performs the following:
- It stops the Bitnami stack temporarily (this is acceptable in my application)
- It ZIPs the contents of the Bitnami folder into a ZIP file named with the date (e.g., 2011-07-11.zip)
- It copies the ZIP file to an S3 bucket
- It restarts the server
As a once-a-week backup it worked pretty well. Backups were a little large, because they contained a full snapshot of the entire stack, but S3 storage is cheap, and it’s nice to have your entire stack in a single backup file.
However, occasionally, the ZIP process would crash the little Micro instance (HT to +Ben Tremblay for first noticing during a heated debate on his Google Plus page). So I started looking for another solution, and realized there is a much more elegant and powerful option: automated EC2 snapshots.
Turns out there are a number of different ways to skin this cat. I chose Eric Hammond’s ec2-consistent-snapshot script, and it proved to be a good choice.
Since the Bitnami Ubuntu 10.04 server is a bare-bones install, a number of prerequisites were missing – notably the Perl libraries DBI and DBD::mysql. Fortunately, all of the answers were already available in the comments section of Eric’s web page. For me, all I needed to do was:
sudo apt-get install make
sudo PERL_MM_USE_DEFAULT=1 cpan Net::Amazon::EC2
sudo apt-get install libc6-dev
cpan -i DBI
cpan -i DBD::mysql
The first time I tried it, it worked. One command – and in about 0.8 seconds I had taken a snapshot of my disk. In no time at all I had installed a cron job to automatically snapshot my server.
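For reference, the cron entry looks something like this (the schedule, paths, volume ID, and region are placeholders from my setup – treat this as a sketch, not gospel):

```shell
# /etc/cron.d/snapshot: snapshot the EBS volume nightly at 2:00 AM
0 2 * * * root /usr/local/bin/ec2-consistent-snapshot --region us-east-1 vol-xxxxxxxx
```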
EBS snapshots are always incremental (only the changes since the last snapshot are written to disk) and restore in a flash. I’ve done a restore and it takes just a few seconds to reinstantiate a machine. And the actual backup is absurdly gentle on the machine – the script runs in about a second. Bang! Instant incremental backup. It’s a miracle.
The script is designed to flush the database and freeze the file system so that the snapshot is taken in a “guaranteed consistent” state. Unfortunately, to freeze the filesystem you have to be running XFS, and the Bitnami machine isn’t. While I agree that it is important to quiesce the database prior to snapshotting, I don’t know that it is required to freeze the filesystem, since EBS volumes are supposedly “point-in-time consistent.” Regardless, my web sites do so little writing to disk that it is inconceivable that my file system would be caught in an inconsistent state.
In short: *rave*.
As we all know, the music business is in a new era. People are decreasingly willing to pay for music and the business is decreasingly willing to fund its production. As a result, quality artists like Vanessa – who would have had no trouble securing a solid label deal in 1995 – are turning to new social models in order to fund the expense of record production.
Please support this endeavor by visiting Vanessa’s Kickstarter Campaign.
Over the past few days I’ve been performing a pretty significant WordPress migration for a set of sites that I have been hosting.
The source is a set of individual WordPress sites running on a small Amazon EC2 Windows instance. I migrated them to a multi-site installation running on a Micro EC2 Linux instance.
Over the course of the conversion I learned a variety of lessons.
First, I learned that the WordPress multi-site (“network blog”) feature is still fairly half-baked. You have to be prepared to get your hands pretty dirty if you want to make it work.
I also learned to really appreciate the Bitnami WordPress Stack AMI. It allows you to spin up a fully-configured, ready to use Ubuntu LAMP / WP stack onto an EC2 micro instance with a minimum of fretting.
I will update this post with some details of the process for those interested. In the meantime – success is mine!
Today, Rocky Agrawal offered an interesting article on TechCrunch, “Solving the Scoble Problem on Social Networks.” It’s a good read. The gist is that on certain social networks (for example, Google+) there are certain people whose presence in the stream actually ruins the value of the stream, even though their content is worth reading.
Rocky uses Robert Scoble as an example. Robert has so many avid followers that when he posts, there is so much commentary that his posts dominate the Stream, drowning out all other commentary. Rocky concludes that there is no choice for him other than to block Robert Scoble altogether.
As a solution, Google+ allows you to view the content from individual circles. This feature is useful, for example, to see only the posts from your family or a specific group of friends. But it doesn’t solve the problem of having your main Stream wrecked simply because you happen to follow Robert Scoble.
What Google+ needs is a way to filter the main Stream by excluding one or more circles. By curating a circle of “noisy” posters, it is then possible to easily “de-noise” the stream by deselecting only those circles. I call this solution “The Descobleizer.”
As you can see, I’ve filtered out the “noisy” elements of the stream by de-selecting my “Acquaintances” and my “Following” circles. What’s left is a non-noisy Stream of everybody else. This maintains the value of having a Stream as well as allowing me to still follow guys like Tom Anderson and Robert Scoble.
Several people made some good points in regard to my article on iPad vs. Windows 8.
The most salient one, and the one I keep hearing, is the comparison to iPod. It goes like this:
Yes, Apple only garnered a minority market share with the Macintosh and the iPhone. But with the iPod, Apple was able to create and hold a substantial majority market share by establishing such a strong brand identity that “iPod” became synonymous with “portable MP3 player.” Now, the iPad seems to be holding a majority market share as well by making itself synonymous with “tablet.” Therefore we should compare its trajectory to the iPod, not the iPhone or Macintosh.
The other salient argument goes like this:
Apple has a lock on the “high end” tablet market. The iPad is better conceived, designed, and constructed than its Android or Windows counterparts. Users really aren’t that interested in a marginally lower-priced machine that offers lower design and build quality, and it’s hard to see how other manufacturers can “out-quality” the iPad – or whether users even want a “better quality” tablet than the iPad.
I like argument #2 best.
The problem with argument #1 is that it ignores the market dynamics. Macintoshes, iPhones, and iPads all have one thing in common: they are pieces of hardware running an operating system. OK, technically this is also true of the iPod, but only technically: the iPod’s OS is more like embedded firmware.
Apple’s minority market position with the iPhone and Macintosh stems from the fact that the OS and hardware are coupled. Apple competes not just with Microsoft (for the OS), but with a gazillion other PC manufacturers (for the hardware). It does this with phones as well, competing not just against Android but against every phone maker that produces an Android device. So Apple can sell more PCs than any one PC maker, and more phones than any one Android manufacturer, but against the market as a whole it remains a minority player, albeit a large, powerful one.
Well, tablets are no different from phones and PCs: they are a piece of hardware running an OS, and it is a matter of time before tablet makers are able to closely copy the hardware designs of the iPad and the software advantages of iOS and release an Android tablet that competes well. Will people buy it? Yes. Android has a majority market share in phones and a compelling tablet offering will appeal to that majority.
Windows is more of a wild card here. In my previous post, I pointed to the fact that corporate IT departments will be much more likely to adopt a Windows 8 tablet than an iOS or Android tablet since it is an OS which they already support and understand. I still think this is true.
Many people countered with the argument that with HTML5, it is irrelevant which device you support. I agree but remain skeptical whether corporate IT departments will develop mission-critical wireless HTML5 applications. Corporate IT is happy with hard-wired web apps, but when it comes to running a mission-critical app over 3G or 4G networks, I think that is a far more risky proposition.
If I were asked to develop a mission-critical application that ran wirelessly over a 3G or 4G network, I would almost certainly develop a “fat app” that replicated its data with the mother ship and could run at 100% during network unavailability. And, as a corporate IT developer, I would lean heavily on Windows as the platform of choice for that application, especially since the odds are very strong that the company already has a sizeable investment in technologies like .NET and MS SQL Server.
If (and this is a very big “if”) Microsoft can deploy a compelling tablet version of Windows before the market has saturated, I think there is a good chance that they will capture significant corporate sales. As we’ve seen in the past, inability to penetrate the corporate market was a serious impediment to Macintosh and, for a while, also the iPhone. If Microsoft can execute, this is a strong opportunity for them to stay in the game.
I’ve been traveling since the arrival of Google+ and therefore have been forced to use it almost exclusively from my Android phone. I’ve enjoyed the Google+ app immensely – it’s a really good app – but there are a few features that, taken together, would significantly enhance the experience of using the app.
In short, the app designers need to focus on answering this question: “what if the user has to use the Android app (virtually) exclusively?” The app works great as an add-on to the web app. But a truly mobile user may be stuck with using the app exclusively for days if not weeks, and may bump his head on the tiny limitations of the app and give up. These enhancements would go a long way towards solving these problems.
Ability to Parse Links into Previews
In the web app, there are four ways to enhance your post – Add Photo, Add Video, Add Link, and Add Location. In the Android app, one of these – Add Link – is not available. When you add a link, it just gets pasted in, and Google+ doesn’t create the nifty thumbnail / summary of the page to encourage viewers to click through.
Ability to Edit Posts and Control Sharing
In the web app, you can edit and delete posts as well as control resharing. It is imperative that these functions be added to the Android app.
Ability to +Mention Someone Who Isn’t Already in Your Circles
When I comment on a post in the web app, the app is able to convert +mentions into hyperlinks. In the Android app, this only works if the user is in one of your circles. Otherwise, the app doesn’t prompt you with the correct name, nor does it create the hyperlink. This should work the same on the web or on the phone.
Reliable Notifications
I’ve found notifications on the phone to be spotty at best. Usually I’m notified only after I’ve already launched the app and read the posts… not really a useful notification.
After using the Google+ application exclusively for a week, I feel confident that if it supported these features / functions, it would be where it really needs to be in order to keep people using Google+ when they’re away from their computer for extended periods.
There’s been a lot of buzz in the industry press recently about Windows 8, the new touch-centric Windows from Microsoft.
Much of the press has been understandably skeptical. Apple definitely hit a home run with the iPad, building it on top of the iOS mobile touch interface. Microsoft, instead, is building “up” from Windows by layering a new browser and application UI paradigm on top of existing Windows. It’s easy to see where Microsoft might stumble, and hard to see how Windows 8 could possibly approach the seamless elegance of iOS.
And, the truth is, it probably won’t.
And, the truth is, it probably won’t matter.
A History Lesson
The year is 1990. I’m sitting at my workstation in Classroom 2000 on the University of Texas campus in front of two state-of-the-art machines: a 386-powered IBM PS/2 running OS/2 and Windows 3.0, and a Motorola 68030-powered Mac IIci.
I’m teaching a class of IBM Systems Engineers (a glorified term for salespeople) who have come to learn about desktop computers. In this class we’re learning about PostScript, but really, the whole exercise is to throw Macintoshes in their faces to scare the hell out of them. And it works. More than once, I hear an IBM employee mutter, “We can’t win.”
But they did.
In designing the Mac from the ground up as a windowed operating system, Apple has the clear technical advantage. The machine is slick as hell: 32 bit architecture, peer-to-peer networking, 24 bit graphics, multitasking, and a beautiful, well-conceived UI. Conversely, in PC-land, there’s Windows running on top of 16 bit DOS: a veritable Who’s Who of Blue Screens of Death and a nightmare of drivers and legacy text-based apps running around.
And yet, Apple failed to capitalize on their obvious competitive advantages, barely growing their market share over the next 10-15 years.
Why? Because the largest purchasers of computers are corporations, and corporations purchased IBM / Microsoft as an extension of their existing computing platform. Partly this was ignorance of what the Macintosh could do, and partly it was due to specific shortcomings of the Macintosh platform – but neither was the deciding factor. The real reason the Macintosh never broke through the corporate barrier was that it never made sufficient sense to throw out all the legacy apps and start over on a new hardware and software platform.
Office applications are not the engine of the productivity boom. Word processors and spreadsheets don’t offer competitive advantage. Factory automation, enterprise resource planning, sales force automation, customer and supplier portals – these are the expensive and risky custom-built applications that drive competitive advantage. For that reason, you sometimes still see applications that remain GUI-less – you don’t screw with stuff that works – and oh by the way, throwing a nifty UI on an app like that can cost a fortune and offer negligible – even negative – payback.
So to synopsize our history lesson: Apple failed to sell to corporations because it never made good financial sense for those corporations to reinvent their line-of-business applications for a different platform. Apple established itself as a great consumer brand and carved out niches in media production and desktop publishing – markets that were not tied to traditional corporate IT. But because the corporate world used PCs, most individuals purchased PCs for the home, and Apple was unable to substantially grow its market share in spite of technical advantage and overall coolness.
We are now seeing the same history lesson repeat itself with the iOS-based iPad tablet going head-to-head against the next generation of Windows tablets. In order to create the ultimate tablet experience, Apple has adopted iOS as the application platform for the iPad. And while the iPad is a formidably slick and compelling machine, iOS is probably not the operating system of choice on which to develop mission critical corporate IT applications.
Enter Microsoft with Windows 8. Will it be clunky? Almost certainly. Will it fray around the edges? Yes. Will there be jarring experiences where the user drops suddenly and unexpectedly into the old mouse-based paradigm? Definitely.
But Microsoft can offer something that Apple can’t. There are thousands, maybe millions of line-of-business applications deployed with technologies like C++, .NET, Access, and SQL Server. Companies cannot and will not jettison them in order to rewrite for iOS. But they will extend them to a Windows 8 tablet.
Microsoft’s decision to layer a touch interface on top of Windows is the only logical decision. It’s the same decision they made in the late 1980s when they layered a GUI on top of DOS. With Windows, Microsoft retained the established customer base while expanding their market reach by extending, rather than reinventing their operating system. The business advantage outweighed the technical disadvantage. With Windows 8, they can do it again.
I think the decision is brilliant.
The Proof is in the Pudding
Now, we simply have to wait and see if Microsoft can deliver. That may be a stretch. Microsoft has a “hit-miss-miss” record with Windows: it was not until Windows 95 that Microsoft pulled within reach of Apple, and only Windows XP was solid enough to truly compete technically. This time, Microsoft cannot wait 10-15 years to catch up.
I think that it’s fair to guess that Windows 8 will not be an iPad-killer, no matter how great it is. Fortunately, it doesn’t have to be an iPad-killer. It just has to establish a baseline of functionality and provide a suitable application development platform. Corporations will develop impressive line-of-business applications for the touch interface – specifically field-worker automation applications – if the platform is robust.
If compelling touch-based business applications can be deployed on Windows 8, it will have done its job: it will have convinced corporations that Windows can meet their needs for a touch-tablet computer, and Apple will be stymied in their attempt to finally break the barrier keeping them out of corporate America.
PS: I am writing this on my brand new, and very sweet, MacBook Pro.
Yep. It’s happened again:
Computerworld – LulzSec, a hacking group that recently made news for hacking into PBS, claimed today that it has broken into several Sony Pictures websites and accessed unencrypted personal information on over 1 million people.
The attack? A simple SQL injection attack. Most web sites built since 2002 have known how to defend against SQL injections.
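For anyone unfamiliar, the standard defense is parameterized queries instead of string concatenation. A quick sketch, using Python’s built-in sqlite3 purely for illustration (Sony’s actual stack is unknown to me):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password_hash TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hash123')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# VULNERABLE: string concatenation lets the payload rewrite the query:
#   "SELECT * FROM users WHERE name = '" + user_input + "'"

# SAFE: a parameterized query treats the payload as a literal value
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matches no user, instead of matching all of them
```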
“What’s worse is that every bit of data we took wasn’t encrypted,” the group claims. “Sony stored over 1,000,000 passwords of its customers in plaintext, which means it’s just a matter of taking it.”
Storing passwords in plaintext is simply negligent. The accepted practice is to store only a salted hash, so that a database dump yields nothing directly usable.
If customers experience identity theft as a result of this breach, you should expect a class-action lawsuit. These aren’t secure websites breached by a sophisticated attack. These are utterly inept programming decisions.
I have a bad, bad feeling that this is going to get a lot worse for Sony.
What’s even worse than all of this?
I own Sony stock.
I have finally done it. Please forgive me, Mom.
That’s right. I bought a Mac: the new quad I7 15.4″ MacBook Pro.
And what’s more: I’m switching to Pro Tools 9 from Sonar.
I’ve had a long love-affair with my Dell Mini 9 Hackintosh. The little sucker went with me everywhere. I used it as my travel computer. I used it in my keyboard rig. Small to the point of silly, and relatively inexpensive, it was the perfect travel machine.
Well, it got stolen in Mexico.
A confluence of needs had me wanting a new MacBook Pro anyway, and what better time than after the sudden loss of the Hackintosh. So I have finally taken the plunge for real.
More to come…
Hate to say I told you so, but in April 2009 I wrote:
The keyword to watch for now is #doubledip. Because with the recent uptick in the economy, the investing world is going to start looking for signs of a double-dip recession.
Now this article confirms my prediction:
The prices of single family homes in March dropped to their lowest level since April 2009, confirming a “double dip” because values are now below where they were since the housing market collapsed, according to a closely watched price index released Tuesday.
By now, everyone knows that Sony’s Playstation Network got hacked earlier this year. It’s a big mistake that shouldn’t have been made, but we all make mistakes. The key to Sony’s viability as a player in the online world is that it be able to learn from its mistake.
In a warning to users issued on Thursday, So-net said an intruder tried 10,000 times to access the provider’s “So-net” point service […] from the same IP address.
There is absolutely no reason why any online service should allow an intruder ten unsuccessful login attempts from the same address, much less 10,000. This represents a complete failure to grasp the fundamentals of security, and any reasonable observer would have to conclude that Sony is completely security-blind and totally naive. You can expect many, many more stories like this to emerge unless the company undertakes a complete reinvention of its online presence.