TV News as Wallpaper: Ubiquitous infotainment for the business traveller

A recent trip led me to a few days spent mostly in airports, hotels, and restaurants. A few days into the trip, I was having breakfast in a hotel restaurant. From where I was sitting, I could see three or four televisions mounted on the ceiling in the various corners of the large lobby restaurant. All of the televisions were tuned to CNN’s 24-hour news channel, Headline News. That morning, the image of choice was an enormous plume of smoke from a barge that had exploded in the harbour in New York. The black cloud unfolded on multiple screens, filling my peripheral vision like an electronic Warhol.

These images and others like them unfolded on screens in airport terminals, restaurants (obviously I wasn’t enjoying high cuisine), and hotel lobbies. Everywhere I went, the TV “news” was streaming images of bombings, war, and disasters.

For anyone who’s seen Michael Moore’s Bowling for Columbine, the film’s most potent thread is not the availability of weapons or gun laws, but rather the culture of fear. There it was, the culture of fear – and that’s when it dawned on me: television news is a kind of electronic wallpaper.

While watching Frasier in the hotel room one night, a news ticker appeared at the bottom of the screen to inform us that a child had been abducted. That was it – a boy had been abducted, with no other details. How can news that vague serve any purpose other than spreading fear?

That said, my American TV experience wasn’t entirely negative. I discovered Ali G.

 

If a tree falls in the forest, and there’s no one there to blog it…

Being almost as North-by-North-East as you can get on this continent, I didn’t get to the South by Southwest conference. However, the host of the SXSW Web Awards ceremony, John Styn (this link leads to a man in pink briefs), put together a series of videos in the style of Apple’s Switch campaign.

The video on blogging is gold: “If a tree falls in the forest and there’s no one there to blog it, did it really fall?” – (14Mb Windows Media file).

 

802.11g doesn’t work (yet)

[802.11 WiFi standards chart]

Several of my co-workers and I recently went on an IBM ThinkPad shopping spree and we decided it was time to go WiFi at our office.

We did some reading on the available protocols (I’ve worked up the helpful chart to the right; a quick summary follows). 802.11b is everywhere. 802.11a seems to be nowhere. 802.11g is faster and backwards compatible with 802.11b. G it is.
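For anyone reading without the chart, the options broke down roughly like this (a summary from memory of the published specs – my figures, not taken from the chart itself):

  • 802.11b – 2.4 GHz band, up to 11 Mb/s; widely deployed
  • 802.11a – 5 GHz band, up to 54 Mb/s; faster, but incompatible with b and rare in the wild
  • 802.11g – 2.4 GHz band, up to 54 Mb/s; backwards compatible with b, but still only a draft specification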

However, it turns out that the 802.11g spec is not finished, and all products released as “802.11g” are based on draft specifications. What does this mean for you? Don’t buy 802.11g products yet – they don’t work.

We bought a Linksys Wireless-G Access Point. While they do clearly describe the product as “based on the Draft 802.11g Specification”, we naively assumed it would work. It doesn’t. Apparently it does work with Linksys’ own 802.11b client cards, but we all have different client cards.

Apparently we should have known better – we are obviously not the first people to run into this. The Register has an article describing the situation. However, I thought some anecdotal evidence might be helpful: we bought a Linksys WAP54G Wireless-G Access Point and it did not work with our 802.11b cards (we tried a variety of different cards: Cisco, Orinoco/Lucent, and Phoebe).

On the bright side, I’ve now got an 802.11b network at work and at home. This is one of those “Eureka” technologies – like when I first tried instant messaging. A recent trip to Ottawa showed about 20 networks in just a few blocks, about half of them open. For any local readers, look forward to more on open WiFi in Charlottetown (contact me if you are interested in participating).

 

Open Software Development Process

When you spend much of every day in front of a computer (which you probably do if you are reading this), you come to know the intricacies and quirks of your software much as a musician comes to know the feel of their favourite instrument. Sometimes the smallest bugs become the biggest annoyances.

Internet Explorer is a program I use frequently. In general, it is a good program. However, it has quirks that drive me mad. Many of these quirks affect a great number of users and could probably be fixed in a matter of hours. Yet I don’t report these issues to Microsoft. Why is that?

I don’t think Microsoft will listen. I don’t think Microsoft is a bad company (you should be suspicious of anyone who claims it is). However, Microsoft is not a particularly open company, and they are enormous. Would my feature request or bug report ever get through to a person who could evaluate it effectively? I doubt it. This isn’t necessarily Microsoft’s fault, though I’m sure they could improve both their responsiveness and my perception of their responsiveness. It may simply be that there are too many people like me with too much feedback to reasonably parse.

This is where smaller and more open software development processes can really shine. Since Apple’s Safari web browser project was announced a few months ago, there has been a great deal of discussion and an enormous amount of feedback on the browser. I’m sure much of it was noise, but there has certainly been plenty of useful feedback as well.

Add to this an open and responsive developer, and things get interesting. David Hyatt was hired by Apple to work on the Safari project. He publishes a weblog on which he regularly posts progress updates, requests for input, and explanations of design choices from the Safari project. Apple was brilliant to let him do this.

Take, for example, a public exchange that took place between a Safari user (and web expert), Jeffrey Zeldman, and Safari developer David Hyatt. Zeldman posted a detailed comparison of how his site rendered in the Safari browser and in the Mozilla/Gecko-based Chimera (now called Camino – good name choice, by the way). The next day, Hyatt posted a response explaining the reasoning behind the differences.

Hyatt was equally open and responsive to input when weblogger Mark Pilgrim posted his initial feedback on the Safari beta.

Hyatt is not alone in his process. Dave Winer (formerly) of UserLand Software, Ben & Mena Trott of Movable Type, Brent Simmons of NetNewsWire, and many others all communicate and collaborate with the users of their software.

It is important to note that none of these projects are open source. Some, like Apple’s Safari project, build on open source projects (the KHTML rendering engine, in Safari’s case), but they are not open source themselves.

This openness makes me want to use their software. It makes me confident that my input will be considered and that my problems with an application might disappear in an upcoming version. I’ve seen the desire for an open development process drive software choices in corporate environments as well.

This is obviously not a large concern for all users. My parents are not interested in communicating with the developer of their browser – they just want their Hotmail. However, there is a large community of wannabe experts like me that collectively bears significant influence on the choice of software (my parents are using Mozilla because I installed it for them).

I would love to see other key Microsoft and Apple developers blogging (let’s see some key Microsoft and Apple executives blogging while we’re at it).

 

When is enough quality, enough quality?

When the MP3 format first appeared, there were some who dismissed it because it was a step backwards in quality. This is true. Many MP3s are encoded at 128Kb/s, a bitrate at which slight differences from the original can be discerned on a good pair of headphones.

Obviously, these people were wrong. The slight loss in quality was a small price to pay for the massive savings in file size. Taking a 30Mb WAV file and turning it into a 4Mb MP3 made it reasonable to move music across dial-up connections, and a breeze to move across higher-speed networks.

In the case of MP3s, the quality was enough. As storage and bandwidth become more available and affordable, this quality gap between CDs and MP3s will close (through higher-quality MP3 encoding, and through new file formats).
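For the curious, here is the back-of-the-envelope arithmetic behind that file-size difference. These are standard CD and MP3 figures, not numbers from the post – the exact sizes depend on the length of the song:

```python
# CD audio: 44,100 samples/s x 16 bits x 2 channels, uncompressed.
# A typical MP3 is encoded at a fixed 128 kb/s.
cd_bits_per_sec = 44_100 * 16 * 2     # ~1,411,000 bits/s
mp3_bits_per_sec = 128_000

song_seconds = 3 * 60                 # a three-minute song
wav_mb = cd_bits_per_sec * song_seconds / 8 / 1_000_000
mp3_mb = mp3_bits_per_sec * song_seconds / 8 / 1_000_000

print(f"WAV: {wav_mb:.0f}Mb, MP3: {mp3_mb:.1f}Mb")   # WAV: 32Mb, MP3: 2.9Mb
print(f"About {cd_bits_per_sec / mp3_bits_per_sec:.0f}x smaller")  # ~11x
```

An eleven-fold reduction is what turned a multi-hour dial-up download into a manageable one.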

What originally made me wonder about when enough quality is enough quality was a local phone call with a friend. We were having a quiet conversation and I listened to my friend’s voice through the tinny phone receiver speaker. It occurred to me that I had all of the equipment in front of me to do much higher quality audio communication. My PC has a 16-bit sound card and a net connection that can easily stream great sounding music. I have a great microphone too.

I want higher quality telephone audio. Why has our technology stopped at the current level? Perhaps it is a limitation of the infrastructure, but I doubt this (I’m downloading MP3s at 300Kb/s on my DSL connection on the same phone line). It’s certainly not the speaker and microphone technology that is limiting quality.
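To put rough numbers to that hunch (standard telephony figures, and my own comparison, not anything from the phone company): a digital landline voice channel is sampled at 8kHz with 8 bits per sample, capturing only about 300–3,400Hz of the voice.

```python
# Bitrates of the audio channels involved (standard figures).
phone_bits_per_sec = 8_000 * 8       # 64 kb/s: a digital landline voice channel
mp3_bits_per_sec = 128_000           # near-CD-quality music stream
cd_bits_per_sec = 44_100 * 16 * 2    # ~1,411 kb/s: uncompressed CD audio

for name, bps in [("Phone", phone_bits_per_sec),
                  ("MP3", mp3_bits_per_sec),
                  ("CD", cd_bits_per_sec)]:
    print(f"{name}: {bps / 1000:.0f} kb/s")
# Phone: 64 kb/s, MP3: 128 kb/s, CD: 1411 kb/s - a DSL line that can
# stream MP3s in real time has ample room for MP3-quality voice.
```

In other words, a voice call at MP3 quality would need only about twice the bitrate of the channel the phone company already delivers.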

So, it’s not the technology, it’s us – the customers. We must not care enough to demand higher quality, and apparently we are not willing to pay for it.

Note to telcos: I will pay for it (I’d gladly pay $100 for a new handset/receiver that gave higher fidelity audio).

HDTV is another example of meagre customer demand for higher quality. There are loads of factors that I don’t claim to understand (or even know about) that have stunted the adoption of HDTV. However, it seems to me that people simply don’t care.

I’ve seen HDTV at the local FutureShop. It looks great – obviously a far better picture than plain-old NTSC. But Joe Public, with his 19” TV, watching Good Morning Regis & Stupid, doesn’t need any more quality. Regis looks just fine with XXX lines.

48-bit colour? Not unless you are scanning for the cover of National Geographic. 32-bit audio – does it even exist? When is enough quality, enough quality?

 

Counting the dead in real time

In a world with over 6 billion people, there are bound to be a few thousand in the throes of some disaster (natural or otherwise) at any given time. Since news agencies thrive on breaking disaster news, it’s not uncommon to hear and see the numbers of dead flying around like a stock ticker.

This week, three horrible events were covered extensively in the Canadian and American news: the ‘stampede’ in a Chicago nightclub, the subway fire in South Korea, and the plane crash in Iran. As each of these events unfolded, the death tolls reported on various websites – including CBC, MSNBC, CNN, and the BBC – were contradictory. In some cases, the variations were quite significant.

For example, at around 8:30 PM Atlantic time, I took a screenshot of a Google News search for Iran air crash. See a screenshot of the results with the casualty estimates highlighted.

During the September 11 attacks in New York and Washington, the immediate speculation varied from a staggering 50,000 to (an also staggering) 5,000 (the toll now sits somewhere between 3,000 and 4,000).

However, these were not reported as concrete numbers – nobody knew what was going on and we all knew it. Many of the numbers being reported in the screenshot above are not qualified as unconfirmed.

Is this kind of inaccuracy understandable? Is news being delivered before it is ‘ready’ (able to be confirmed), or is it better for news agencies to share what they know, when they know it?

 

Wacky Videogame Crossover

An article from Salon has the headline: “Man steals six cars during chase“.

Now, I haven’t played Grand Theft Auto since the original, years ago. But anybody who’s had a taste of that game must have thought for a second, “Man, that guy’s living the dream”.

Except for the prison part.

 

AoV’s Roving Reporter on Google/Blogger

Not wanting to be left out when something interesting happens, I emailed Google’s Press Center about their acquisition of Pyra (the company behind Blogger). I received a sparse, but prompt and polite reply from the Director of Corporate Communications including this statement:

“Google recently acquired Pyra Labs, developers of Blogger — a self-service weblog publishing tool used by more than one million people. We’re thrilled about the many synergies and future opportunities between our two companies. Blogs are a global self-publishing phenomenon that connect Internet users with dynamic, diverse points of view while also enabling comment and participation. In the coming weeks, we will report additional details. Blogger users can expect to see no immediate changes to the service.”

Often when webloggers are covering a developing news story, there is a tendency to wait for the commercial media outlets to get the official story. Why wait? Pick up the phone and call. You’ll probably just get the company line (as in this case), but it’s worth a shot.

Also, why did it take News.com a day and a half to catch up on this story? I still don’t see it on Salon.

 

Best Webcam Shot of the Year

When it gets this cold (-40 with the windchill in Charlottetown today), you have to talk about the weather.

From the IslandCam in downtown Charlottetown today:

today: boxers AND briefs

 

Universal Access to All Human Knowledge

[Photo: Brewster Kahle speaking at the Library of Congress]

A link found on Matt Haughey’s a.wholelottanothing.org led me to a talk by Brewster Kahle of the Internet Archive. His organization is working on a variety of projects to make public domain content available in an “internet library”. Among these projects is the WayBack Machine, which archives the web.

The talk is part of a series at the Library of Congress and runs 1 hour and 26 minutes in RealVideo format. It is worth watching: Brewster Kahle: Public Access to Digital Materials (1hr 26min RealVideo).

Kahle’s basic idea is universal access to all human knowledge. Every book, speech, TV show, website, concert, etc. should be available to all of us. He looks at four main questions:

  • Should we? Yes!
  • Can we? Is it logistically possible from a technical and financial perspective? His answer: Yes.
  • May we? Will we be allowed to make all knowledge available under law? His answer: Yes.
  • Will we? He leaves this as an open question.

His numbers on the cost to digitize (scanning, etc.), store (disk space), and make available (bandwidth) all human knowledge are fascinating. According to Kahle, the hardware and labour costs required to make all books, all television, and all music ever created available are not that daunting (within the hundreds of millions of dollars).

Taking books as an example (a quick check of the arithmetic follows the list):

  • Roughly 100,000,000 books have been created in all of history
  • The Library of Congress has about 26,000,000 books (I was impressed and amazed that the Library of Congress has 26% of all books ever created)
  • A book costs between $10 and $100 to acquire and digitize
  • A book takes up about 1Mb of space
  • 26,000,000 books would take 26 TeraBytes.
  • 1 TeraByte costs about $60,000
  • The entire Library of Congress could be stored for about $1.5 million
  • Books can be printed, cut, and bound for $1/book from a mobile book printer (~$15,000)
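
As a sanity check, the storage arithmetic above holds up. The per-book size and per-terabyte cost are Kahle’s figures as reported, not mine:

```python
# Verifying the Library of Congress storage estimate quoted above.
books_in_loc = 26_000_000     # books held by the Library of Congress
mb_per_book = 1               # Kahle's ~1Mb per digitized book
cost_per_tb = 60_000          # Kahle's ~$60,000 per TeraByte

total_tb = books_in_loc * mb_per_book / 1_000_000
storage_cost = total_tb * cost_per_tb

print(f"{total_tb:.0f} TeraBytes, about ${storage_cost:,.0f}")
# 26 TeraBytes, about $1,560,000 - roughly the $1.5 million quoted
```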

If anyone has the right to make these claims, it would be Kahle, whose organization is already storing massive amounts of data for the WayBack Machine and other projects.