Armchair Engineering: Apple should go High-Res

A dumb mockup of an iMac with a giant screen - even though screen size isn’t really what I’m talking about here.

I have some unsolicited advice from an armchair engineer to Apple.

Microsoft is planning for the eventual advent of much higher-resolution LCD panels than we currently have. Their next major operating system release will be entirely vector-based, not tied to the pixel or any specific resolution. However, Microsoft’s next release isn’t scheduled until 2006.

Apple already has a vector-based resolution independent user interface that has been maturing for several years. Apple also has control over the hardware (something Microsoft is starting to get right with their Athens PC specs).

Clearly, price is the hurdle in delivering higher-resolution LCD panels – but Apple sells a high-end product to a market that is willing to pay a premium. I think I might be persuaded to buy an Apple if it came with a 17” or 18” display with a 3200 by 1800 pixel resolution (or higher). At that point, text starts to become as readable on screen as it is on the printed page.

One issue that they would have to deal with in a jump to a resolution like that is the elements that are inevitably pixel-based. While the primary user interface controls are scalable, some applications would surely include pixel-based elements.

The most important pixel-based element would be the web. While good CSS and web fonts would thrive in a high-res environment, our trusty GIFs, PNGs, JPEGs, and any pixel-specified fonts or CSS elements would be minuscule. On a 200dpi screen, 12-pixel Times New Roman would be less than 1/16th of an inch tall.

Perhaps a high-resolution-aware web browser could scale the page elements up to a reasonable size. Of course, quality would suffer, but if your resolution takes a large enough jump, you could double the size of web graphics and things would look at least as good as they do on our 2003 screens.
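The arithmetic behind both claims is simple. Here is a quick sketch (assuming, as was true in 2003, that a CSS pixel maps one-to-one to a device pixel):

```python
def physical_inches(pixels: int, dpi: int) -> float:
    """Physical size, in inches, of a pixel-specified element at a given screen density."""
    return pixels / dpi

# 12-pixel type is an eighth of an inch tall on a typical 96dpi screen,
# but under 1/16th of an inch (0.0625") at 200dpi.
print(physical_inches(12, 96))   # 0.125
print(physical_inches(12, 200))  # 0.06

# Doubling a graphic on a 200dpi screen leaves it at an effective 100dpi --
# still denser, and therefore at least as sharp, as a typical 2003 screen.
print(200 / 2)  # 100.0
```

The same function shows why an unscaled 800-pixel-wide page shrinks to roughly four inches on a 200dpi panel.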

I could hack together my own setup right now. IBM is selling a 22” LCD with a native resolution of 3840×2400 ($7,500 USD as of this writing). A graphics card (or several graphics cards) to power that kind of resolution would also cost a premium. Even then, I’d be stuck running operating systems that might let me scale the font size up, but a typical website (800 pixels wide) would only be about 4½” wide. I’d also have to get a much better digital camera.

Apple is in a position to pull together the hardware (LCD and graphics card) and software (OS X with Quartz + website magnification in their own Safari browser). If they could pull it off for an anywhere reasonable price (maybe $6,000 for a computer and screen), they would take a giant leap ahead of any other platform.

 

Paying for Fewer Features

I’ve heard many people complain that Apple is charging too much (or that they shouldn’t be charging at all) for the “point releases” (10.2, 10.3, etc.). I completely sympathize with these complaints. The last two point-releases, 10.2 and 10.3, each costs $129 US ($179 Canadian).

However, I have to stick to the position that I’ve long since taken on software upgrades and pricing. I would love it if Adobe would stop cramming crappy new features into Photoshop and Illustrator. Rather, I would like them to spend a few months without adding any new features and just make everything better. Take the startup time down a few seconds, subtly refine the UI, fix bugs. I would pay.

This is what Apple has done with OS X 10.3. There are some new features, but the important improvements are subtle and all over the place. The end result is that the system just feels better. That’s worth the money for me. I wish more developers would focus on solidifying and simplifying rather than adding more and more features.

My favourite part of the design process is the latter stages, even post-launch/release, when a slew of tiny improvements that seem independently insignificant add up to make the end result seem more mature and mysteriously better. For example, I would love to see the subtle changes that Matthew Haughey has suggested for the new A List Apart site implemented.

In the early stages of development of Mozilla Firebird there was a rule that each release (and there were frequent releases) had to be smaller (file size of the download) than the previous release. This forced the developers to keep simplicity and efficiency in mind and encouraged the optimization of existing features as much as the addition of new ones.

I think where this gets lost is when the marketing department (curse them!) starts to get control over the feature list. I’m convinced that Microsoft Office changes its “skin” with each version just to look like something new and worth paying for.

 

Do we all need a personal system administrator?

My family has embraced the home computer. They use Hotmail to keep in touch with relatives. They use a scanner, despite the absolutely terrible software that came with it (Canon). They use MSN Messenger to chat with friends (a lot). They use Microsoft Word to write papers, letters, and memos and print them off on an Epson ink-jet printer.

The trouble is, every few weeks, their Windows XP computer becomes overrun with spyware, viruses, and general crap. A knowledgeable friend told me that if you put a plain-old Windows XP machine, unprotected, directly on the Internet, it will be compromised in hours. I thought he was exaggerating. After another visit to my parents’ computer, I know that he is not.

Pop-up windows appear even when there isn’t a web browser running (courtesy of some spyware app). I ran a slew of anti-virus and anti-spyware apps and discovered hundreds of unwanted apps and files.

The trouble is, I would rather dig ditches in the hot sun than do tech support. I am terrible at it. My girlfriend tells me that it uncovers an ugly and angry side of me. I have no patience. I find doing tech support more stressful than almost anything else in life. It is a massive personality/character flaw of mine.

So, I’ve come to the conclusion that my parents do need someone to help them with their computer, and I’m not sure I can be that someone. So what do I do? I thought about buying them an iBook (or eMac). That would solve a lot of the spyware/virus issues. However, I’m afraid it would uncover a whole slew of new issues. They would have to learn a new OS – no matter how good it is. I would be less able to help them, as I’m less familiar with OS X than I am with Windows.

I wish I could give them a simple locked-down system with a word-processor and web-browser, and not let them (or anyone) install anything else. I could probably do this with Linux, but that would be a whole new can of worms – and I’m not really qualified.

They are willing to pay someone else to help, but I have no-where to point them. Most tech support at local computer firms is too expensive and the people can be clueless.

Surely I’m not the only reluctant-relative-system-administrator (while talking with Stephen DesRoches about this, he enthusiastically agreed). What can we do to make this easier (for me and my parents)? Help!

My plan for now is to block off a Saturday afternoon and re-format their machine, put it behind a hardware router (as a firewall), and hope it doesn’t happen again.

 

Branding Mozilla: Towards Mozilla 2.0

The Mozilla lizard

I’ve been using and enjoying the products of the Mozilla project more and more lately. I’ve been hooked on Mozilla Firebird for a while, and my recent Mozilla Thunderbird theme was my first real contribution (if you could call it that) to the movement.

I’m very interested in the success of the project, and so I have written a short article outlining some recommendations and ideas for branding Mozilla.

For those too lazy/busy to read the article (or those who understandably might value their time more than my words and ideas), here’s the 10-second version:

The Mozilla Project should adopt a simple, strong, consistent visual identity for the Mozilla products including consistent icons across applications that mesh with the host operating system.

Read the article in full and please feel free to comment in reply to this post.

Branding Mozilla: Towards Mozilla 2.0

Recommendations for the branding and visual identity of the Mozilla Foundation’s product and project line.
 

XUL: How I learned to love non-native GUIs

I hate skins and I love native GUI widgets. Microsoft and Apple have relatively strong sets of user interface controls that people are familiar with. Yet loads of developers seem keen on reinventing these devices from scratch. Media players seem to be particularly bad at this. Microsoft’s own Windows Media Player and Apple’s own QuickTime Player both seem to throw out the entire GUI toolkit and start from scratch, building totally confusing (and totally x-treme) interfaces.

A good user interface should get out of the way – particularly an interface for an application that delivers content (show us the content, and get out of the way). If you want totally x-treme, you don’t upgrade your media player skin – you download Limp Bizkit videos.

When the Mozilla project moved to create their own cross-platform GUI toolkit, many people who were concerned with the user experience cried foul (including myself in this January 2001 rant about skins, and this August 2002 update). The idea struck me as the result of developer-centric thinking and complete disregard for the end-user. A cross-platform GUI would make things much easier for the developers, and a bit easier for the minority of users who work on multiple operating systems, but it isn’t much good for Joe-Windows-User.

Here we are, three years later, and I think they may have been onto something. First, let me be clear that you should always use the available operating system native GUI widgets when you can. Chances are the alternatives you will develop will suck.

Mitch Kapor’s Open Source Applications Foundation has chosen to stick with native GUIs for their Chandler application. This is a smart decision. Most skinned applications suck. Trillian would be a great instant messaging program if it weren’t for the over-skinning (that said, I do use Trillian, as I find it to be the best of the worst).

Put the UI in my hands

Enter XUL (pronounced “zool”). XUL is the XML User-interface Language developed by the Mozilla project. It allows developers to define the user interface of their application using a combination of “off-the-shelf” standards: the interface structure is XML, styled with CSS and scripted with JavaScript. The end result is a relatively accessible architecture for interface design.

XUL does to the user-interface what View Source did to web development. Though I have the advantage of a web-development background (familiarity with the key technologies, like CSS), I can do things to a XUL-based app that I could not do on other applications. For example, I found some of the icons on the toolbar of the Mozilla Firebird browser to be poorly designed – so I replaced them (this was as easy as saving new PNG files and dropping them into a .JAR file). The search box was too narrow – so I made it wider (this was absurdly easy).
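To give a sense of how approachable this is, here is an illustrative XUL fragment. This is a sketch, not the actual Firebird source: the element and attribute names are real XUL, but the ids and layout are hypothetical. The point is that a tweak like widening the search box is a one-attribute change to human-readable markup:

```xml
<!-- Hypothetical toolbar sketch in the style of Firebird's browser markup.
     flex="1" lets the search box stretch to fill the available space;
     its colours and fonts live in ordinary CSS files alongside this. -->
<toolbar id="nav-bar">
  <toolbarbutton id="back-button" label="Back"/>
  <toolbarbutton id="forward-button" label="Forward"/>
  <textbox id="search-box" flex="1"/>
</toolbar>
```

Compare that to recompiling a C++ application to move a widget, and the “View Source” analogy starts to make sense.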

These may seem like insignificant examples, but it opens up a level of control over the application that is not possible with other design methods.

It’s about quality and consistency

Another key development that has softened my stance on Mozilla’s break from the OS-native interface is quality and consistency. Microsoft has been squandering their lead. The quality and consistency of the Windows GUI has been deteriorating rather than improving. Microsoft has also devalued their native Windows interface by allowing its own applications, including their media player and the massively important Office suite, to shirk the standards.

XUL, meanwhile, has gotten much better. The software is constantly improving (and incorporating native OS widgets where appropriate – which is a nice touch). Performance is no longer an issue on newer hardware. Where XUL-based apps might have taken a performance hit compared to native Windows apps, the difference is insignificant on recent hardware.

The Mozilla Firebird browser project shows that XUL can be used to develop a quality interface.

Will the operating systems rise again?

I should note that Apple’s OS X may be an anomaly in that its quality has actually improved over previous versions of the OS. Regardless of your opinion on the style of the OS, it is clearly well rendered. Microsoft is planning on following in Apple’s footsteps by having DirectX handle the desktop GUI in the same way Apple is using OpenGL. There is a chance here that Apple and Microsoft may leap ahead of other interface systems. However, I think it’s more likely that developers of XUL will tap into these improvements than be left behind.

Platform Freedom and Platform Friction

In 1997, Netscape’s Marc Andreessen claimed that:

“[b]rowsers will reduce Windows to an unimportant collection of slightly buggy device drivers”

If anything, it is XUL that has the power to do this. I originally thought that the ability to develop applications that run on multiple operating systems with relative ease wasn’t much good, since the overwhelming majority of computer users are running only Microsoft Windows. However, after my recent stint on a Mac for a week and having switched from IE to Mozilla Firebird as my primary web browser, I’m starting to see something more significant going on.

A friend of mine pointed out that for the average novice Windows user to switch to an alternative operating system, they would also be forced to deal with an entirely new set of applications. However, if Mozilla Firebird becomes your primary web browser and OpenOffice.org becomes your primary office suite, the platform friction is greatly reduced, as these applications are available on other operating systems. Web-based services and applications are another layer that can work well on any OS. Once you’ve got Johnny Windows-user into his Hotmail account, he doesn’t care if he’s running Windows 98, OS X, KDE, or Gnome.

But I still hate skins

I do still hate skins. I dream of a simple media player that uses native GUI controls. I hate that every new version of Microsoft Office includes redesigned menus and toolbars. However, in the accessibility of XUL, I see a small example of how the wall between developers and users can be torn down.

More info on XUL:

 

Mozilla Firebird v0.6: I have a new default web browser

I use a lot of web browsers. I have six different browsers installed on my primary computer, and maybe ten more on other testing machines.

Of all of these, there is one primary browser. When I click on a link in an email or instant message, my primary browser will open it.

Years ago, Netscape 4 was my primary browser. Then, along came Internet Explorer 4, which was dramatically better than Netscape 4. In early 1998, IE4 became my primary web browser. Since then, it has been all IE – including version 5, 5.5, and up until today, 6.

There are other great browsers. Mozilla has had a great browser since before version 1.0. I used it regularly (the standards-compliant rendering engine was great for testing web development work). It wasn’t enough to get me to switch over entirely, though.

Then along came Phoenix. The browser started as a lean off-shoot of the Mozilla project. It became a great browser very fast. I started using it more and more with the version 0.5 beta release a few months ago. I really got hooked on the joys of using open-source software when a feature request I made was answered by a developer with a patch that same day. Still, Phoenix was in the relatively early beta stages and had some key features missing, incomplete, or broken.

Phoenix has been renamed Mozilla Firebird. The Mozilla project has announced that they will be making Mozilla Firebird the primary Mozilla browser (which means Netscape 8 could be based on Firebird, if that even matters anymore). Today, with the release of beta 0.6, Phoenix-come-Firebird is stable enough that I have made it my primary web browser, and I will secretly install it on my parents’ computer.

screenshot of Mozilla Firebird default browser setting

I have a few recommendations for anyone trying out this browser. The core browser is kept as clean and simple as possible (about a 6MB download) and additional functionality is handled through a nice extensions system (as opposed to just piling everyone’s favourite feature into the core).

 

What Microsoft is getting right in Athens

Microsoft’s Athens PC prototype

Microsoft and Hewlett-Packard are developing a concept computer code-named Athens to demonstrate technologies to be included in future products (several high-res photos are available). There isn’t much in this prototype that’s actually new, and it’s more of a marketing tool than a technology demo. However, by putting a lot of simple and smart features together, they are on to something.

  • Screen real-estate
    The demo model includes a $2000+ 20″ LCD (photo). The presenter acknowledges that this is unrealistically expensive for the average business or home user, but cites a report that predicts the same size screen will run about $400 in 2004 (presumably the end of 2004). Jakob Nielsen has been telling us this for years: large screens are not just for designers. There is a significant productivity gain from moving a typical user from a 15″ to a 17″ or 19″ display.
  • Presence
    A Microsoft researcher commented that a urinal can know when you walk away, but your PC can’t. I’ve written before about how user-presence should be handled at the system (rather than application) level. This demo includes some nice moves on that front. Say I’m showing a small group a video or presentation on my PC. One hardware button (Do Not Disturb), and all of my communication systems are notified to leave me alone (incoming phone calls are diverted to voice mail, though caller ID is available, and instant messaging status is set to “away”).
  • PC as docking station
    I’m now a full time laptop user, though I do plug in a mouse when at my desk. The Athens prototype is a docking station for a laptop as well as a PC.
  • Unified Inbox
    With telephony (the most hilarious word ever) technology built into the system, voicemail and caller ID can be easily incorporated into the universal inbox along with email and instant messaging.
  • Smart sleep
    Many systems do this well already, but the Athens PC adds a few nice touches, including fast wake-up and light indicators on the top of the screen that alert you to email, voicemail, or reminders, even when the system is asleep (photo).
  • Hardware controls
    Many try and most fail to add useful hardware buttons to PCs and laptops. My ThinkPad has useful volume controls that let me easily mute or adjust my volume regardless of what application I’m using. If the phone rings while I’m playing music, I can always quickly kill the sound (though the Athens PC does this for you). The Athens PC includes hardware controls for presence (Do Not Disturb, etc.) and hardware indicators for email, voicemail, and reminders (as lights on the top of the screen). Microsoft’s enormous size could help them establish a simple convention that hardware manufacturers could support. After all, these people did add a new key to our keyboards.

For more details on the Athens PC, Microsoft has a Word document with all of the details (695Kb). They are really hitting on some key points, demanding “Appliance-Like Availability” and quiet operation. Microsoft is in a strong position to dictate these types of moves to hardware manufacturers – they have proven themselves with similar programs. In 2000, I wrote about Microsoft’s similar announcement of the TabletPC. Two years later, you could buy one. They did the same with their Media Center PC.

News.com has video of the Athens introduction (link only works in Internet Explorer thanks to News.com’s craptacular pop-up window video display).

Spare me the line about how your computer did some of these things back in 1995. I’m not claiming that these are innovative. Rather, it’s the combination of a lot of smart features (regardless of how innovative) that make this an attractive PC.

 

The gap between User and Programmer

A sharp distinction is often drawn between “using a computer” and “programming”. Using a computer might involve a simple action, like opening a web browser and reading the news. Programming might involve something like creating the web browser itself.

These seem like two different worlds. Clearly there is a significant difference in accessibility and learning curve. However, I wonder whether this difference is only one of scale and magnitude, or whether there is some more profound separation between using a computer and programming one.

We occasionally see features in mainstream applications that put the ordinary user in a role closer to that of a programmer.

  • Macros in word processors, for example, are a (sometimes) simple programming language. A spreadsheet program like Microsoft Excel can be seen as an entire development platform alone (I have a theory that if we could have somehow invented Excel 50 years ago, we’d be living on Mars and we’d have a cure for cancer).
  • Rules in email programs. Outlook and other email programs allow users to set up simple conditionals to file or perform other actions on email. This relatively simple set of rules turns out to be quite powerful.
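Stripped of the dialog boxes, an email rule really is just a tiny program: a condition paired with an action. A minimal sketch (the Message fields and the folder-filing action are made up for illustration, not any real mail API):

```python
from dataclasses import dataclass

@dataclass
class Message:
    """A stand-in for an email message; fields are hypothetical."""
    sender: str
    subject: str
    folder: str = "Inbox"

def make_rule(condition, action):
    """A rule is just: if condition(msg) holds, perform action(msg)."""
    def rule(msg):
        if condition(msg):
            action(msg)
    return rule

# "Move anything from the mailing list into its own folder" -- the kind of
# conditional a user builds in Outlook's rules wizard without ever calling
# it programming.
move_list_mail = make_rule(
    condition=lambda m: "[css-list]" in m.subject,
    action=lambda m: setattr(m, "folder", "Lists/CSS"),
)

msg = Message(sender="someone@example.com", subject="[css-list] float bugs")
move_list_mail(msg)
print(msg.folder)  # Lists/CSS
```

The rules wizard is essentially a friendly editor for exactly this condition/action pair.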

I wonder if we may be pushing this distinction too far. I’m not suggesting that my parents should be learning C++. They are quite content with email, web browsing, and word processing. Still, I worry about the unforeseen repercussions that may arise from the idea we have burned into our minds that programmers are programmers, users are users, and never the twain shall meet.

Are we building our systems in a way that reinforces the gap between users and programmers? Are we building systems that have a bias that makes it difficult or impossible for users to create their own functionality?

I don’t expect Joe Hotmail to start writing device drivers. However, there are technologies that have been biased in the other direction: where the user is often the creator. HTML, for example, was a simple-enough language that, even if it didn’t make it all the way to the average end user, it did enable a far larger circle than “programmers” to do interesting and creative things.

DOS batch files were simple enough that a lot of users who would not consider themselves to be “programmers” could do all kinds of powerful things (though it may have been out of cruel necessity). I remember an old high-school friend wrote a simple batch file that ran off a floppy disk on our school network. It displayed a prompt that looked exactly like the real network login screen. When an unsuspecting user typed in their username and password, it would output a realistic-looking error message, and write the username/password to the disk (a “floppy-in-the-middle” attack?). The confused user would move on to another machine, and my friend would come collect his disk full of network passwords at the end of the day. The key point is that this was possible with very little understanding of “programming”.

Here are a few tasks that someone might want to perform:

  • Notify me when the Canadian/American currency exchange rate drops below a certain point. How would you do this? Excel can grab the exchange rate from a variety of web sources. Now we need a way to check this regularly and pass the info on to a notification of some kind (IM or email).
  • Take all of those emails I get from my old-school co-worker with WordPerfect 5.1 attachments and convert them into Rich Text Format documents. How would you do this? Get all WordPerfect attachments from that particular sender, open them, and save them as RTF files with the same name in a destination directory.
  • Have the top 5 New York Times front page stories print off at 6am.

These are a few examples of relatively simple tasks that would be difficult or impossible for most people to set up. Is it possible to put this kind of power in the hands of novice computer users? The individual components of these example tasks are all relatively simple with common applications. It’s the glue to tie them together that’s too difficult.
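To be concrete about how little glue the first task actually needs, here is a sketch. Everything named here is a placeholder: `fetch_rate()` stands in for whatever currency feed you scrape, `notify()` for whatever email or IM gateway you use, and the threshold is arbitrary.

```python
THRESHOLD = 0.70  # hypothetical CAD/USD trigger point

def fetch_rate() -> float:
    """Placeholder: a real version would query a currency web feed."""
    return 0.69

def notify(message: str) -> None:
    """Placeholder: a real version would send an email or instant message."""
    print(message)

def check_once() -> bool:
    """One polling pass; "check this regularly" is just this in a loop with a sleep."""
    rate = fetch_rate()
    if rate < THRESHOLD:
        notify(f"CAD/USD has dropped to {rate:.2f}")
        return True
    return False

check_once()  # prints: CAD/USD has dropped to 0.69
```

A dozen lines, and yet wiring the real feed to the real notifier is exactly the step no mainstream application lets a novice perform.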

Is this a case of professionals conspiring to keep common-folk from trampling their sacred realm, or is it just hard to do well?

On a related note, Alan Kay gave a talk at the recent Emerging Tech conference that included video of children “programming” simple physics behaviour for learning purposes (QuickTime video of the presentation is available).

UPDATE: Soon after I made this post, I came across Matt Jones’ post on the topic.