Don't Ask -- Observe

What I don’t see discussed about the Customer Experience Improvement Program (or any related data collection tool) is the fact that it only gives you usage information about the subset of users who actually choose to use it. So you are making assumptions/inferences about all your other users when you use that data. Do you actually have a realistic idea of what percentage of your application’s users are represented in that data set? Is there some commonality between the choice to submit usage data and the style of usage of the application? Very difficult questions to answer…

Just as an example, the company I work for (a large defense contractor) does not allow employees to submit this kind of data (it’s considered a security risk). It’s turned off by default in the installs, and it’s blocked by the firewalls and proxy servers. So the everyday workday usage data for a very sizable group of users is not available.

Gather what data you can, but be very careful if you assume that data is actually representative of the mythical “average” user.

Cellular phone manufacturers would do well to learn from this article.

[ Warning: Blatant self-promotion. :slight_smile: ]

In a large application we’ve developed at the company where I work, we included an internal “Big Brother” mechanism that records every data modification that a user makes. (It’s actually nothing more than an internal audit trail table.) As a convenient side effect of that, it’s easy to discern what features they’re using and when.
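
Purely to illustrate the shape of such a mechanism (the commenter doesn’t share their schema, so the table and column names below are hypothetical), the core of an audit trail like this can be a single insert per data modification:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.Timestamp;

// Hypothetical audit-trail writer: every data modification records who
// changed what, where, and when. All names here are illustrative only.
public class AuditTrail {
    private final Connection conn;

    public AuditTrail(Connection conn) {
        this.conn = conn;
    }

    public void record(String userId, String feature, String tableName,
                       String rowKey, String oldValue, String newValue)
            throws java.sql.SQLException {
        String sql = "INSERT INTO audit_trail "
                   + "(user_id, feature, table_name, row_key, old_value, new_value, changed_at) "
                   + "VALUES (?, ?, ?, ?, ?, ?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, userId);
            ps.setString(2, feature);   // which screen/feature made the change
            ps.setString(3, tableName);
            ps.setString(4, rowKey);
            ps.setString(5, oldValue);
            ps.setString(6, newValue);
            ps.setTimestamp(7, new Timestamp(System.currentTimeMillis()));
            ps.executeUpdate();
        }
    }
}
```

Because every row carries a user, a feature name, and a timestamp, a simple GROUP BY over the table is enough to answer “which features are used, and when.”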

We were really surprised when we discovered what features were actually being used as opposed to the ones that we thought were going to be used. Constant mining of that data reveals really surprising things about whether or not certain improvements to the system and new features are actually worth the investment. It also tells us which critical system features are being neglected, and we can then ask the users why they don’t use them (don’t work? too hard to use? too inaccessible? don’t know about them? back doors elsewhere in the system that need to be closed?) and address them.

Our users are notorious for not alerting us to error messages when they appear in the system. (It’s a Web application, written in ASP.NET, and when an unhandled exception occurs, we display a custom error page that begs them to report it to us, but they never do.) So, instead, we log all the unhandled exceptions to a separate database table, and combine that information with the audit trail to get meaningful data to pinpoint what happened.
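
The commenter’s app is ASP.NET; purely as a sketch of the same pattern in Java (the class, page, and table names are made up), a catch-all filter can log the unhandled exception itself before showing the friendly error page, instead of hoping the user reports it:

```java
import javax.servlet.*;
import java.io.IOException;

// Sketch of a catch-all exception logger, a rough Java analogue of the
// ASP.NET custom error page described above: log the unhandled exception
// ourselves, then show the apology page.
public class ExceptionLoggingFilter implements Filter {
    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        try {
            chain.doFilter(req, res);
        } catch (Exception e) {
            logException(e, req);  // write to the same DB the audit trail lives in
            req.getRequestDispatcher("/error.jsp").forward(req, res);
        }
    }

    private void logException(Exception e, ServletRequest req) {
        // e.g. INSERT INTO unhandled_exceptions (message, stack_trace, occurred_at) ...
    }

    public void destroy() {}
}
```

Joining those exception rows with the audit trail by user and timestamp is what lets you reconstruct what the user was doing when the error occurred.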

So Jeff’s right. When you can’t ask them, observe them. Do it silently if you have to. A well-written silent observer won’t lie to you, and it will tell you a heck of a lot more, often in graphic detail, than you ever thought possible. It’s worth the time to code it.

I agree. The other interesting angle is for the developer to make it how he likes it, rather than what a user says they like. Then at least one person (the developer) likes it, and some people will probably agree with him.

This is how I believe Apple works. While they probably do thorough, objective usability testing, I suspect that in the end it’s done ‘Steve’s way’. It’s a good way, and many people like it.

If you are going to put statistics collection into your application, you might as well put in exception collection and automatic logging to your bugtracker.

Just don’t cripple the app by putting in logging at every step. We manage the tradeoff on web apps by putting events into an Application object as they happen, and at checkpoints we bulk-write them to an event database. It is as much for auditing errors and actions as for collecting usage statistics, but it works for that.
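
A minimal sketch of that buffer-and-flush tradeoff (the commenter’s version hangs off an Application object, which is ASP-specific; the names below are made up): record() is cheap and touches no I/O, and flush() bulk-writes everything at a checkpoint.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical in-memory event buffer: logging each step costs only an
// in-memory append, and the database sees one batched write per checkpoint.
public class EventBuffer {
    private final List<String> pending = new ArrayList<>();

    public synchronized void record(String event) {
        pending.add(System.currentTimeMillis() + " " + event);
    }

    public synchronized void flush(EventStore store) {
        if (pending.isEmpty()) return;
        store.bulkInsert(new ArrayList<>(pending));  // one batched INSERT, not many round trips
        pending.clear();
    }

    // Stand-in for whatever actually persists events (a JDBC batch, say).
    public interface EventStore {
        void bulkInsert(List<String> events);
    }
}
```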

The only problem with this blog post is that it doesn’t address the issue of users’ competency changing over time. Novice users become advanced users, and only then will they use the advanced features. So even if someone is not using a feature from the start, it doesn’t mean that they won’t need/want that feature in the future. And people tend to buy based on what they “could eventually need,” not just what they need now. We don’t want to experience buyer’s remorse. This is very true with software as well. How many of us buy the “Ultimate” or “Professional” version just because we don’t want to find out that we don’t have what we need in the future?

Some usability studies don’t take this phenomenon into consideration. It is simply too hard to track users’ progress over a long enough time for them to get comfortable with a product, especially if you are going to put them “under the gun” to learn something as quickly as possible. The Microsoft way of logging information over time can track this and give key indicators about how users progress with the software.

I’m not disagreeing with the crux of the post. I’m just pointing out that usability testing is not as easy as just watching someone use your software/website for the first time. Actually observing (hopefully remotely) is really the only way to monitor how a user’s experience evolves over time.

Matt – most users tend to be “Perpetual Intermediates”:

http://www.codinghorror.com/blog/archives/000098.html

“Some of my best friends are users.” Heh. :slight_smile:

Nice article!

Regarding that mobile phone example: I think it’s a somewhat different scenario on the software side. In software development, you can make a working prototype, send it to your users, and get feedback, but in the case of phones you cannot. You have to rely on a small group of people to design the feature. I think this is the reason we see those useless or out-of-order screen options/messages. Most things get lost in translation, including users’ requirements!

This is happening these days: companies are launching half-baked products, i.e. betas, and letting users decide if they are OK or not. Are they making observations in these never-ending beta cycles? Gmail is still in beta; I wonder if Google is observing something and is not able to make any sense of it even after such a long period of time! :slight_smile:

The big issue I have with the Customer Yada Yada Program is selection bias. Sure, presumably they get enough corporate desktops and well-meaning Aunt Tillies that it washes out in the noise, but there’s a significant (though hardly a majority) share of users who will automatically opt out of all such programs.

They don’t like the implication that, say, VS2005 setup sucks so bad that Microsoft needs help to find out all the ways it crashes, and they don’t trust the privacy policy on Office. It’s not just Microsoft in most cases, but they’re the most visible offender for many of these users.

The UI designer’s job is to make sure that features fit together. You get good marks for making something functional, attractive, and useable. You get extra bonus points for making something so fun and easy to use that it sells itself.

I think many consumers are warming up to well-designed products with less-than-bloated feature sets, while products and software in the enterprise market are still addicted to feature-itis. Google search, the Wii, the iPod, and presumably the iPhone were all hits with consumers because they had the RIGHT features in easy-to-use packages.

In the enterprise world you have Outlook, Blackberries, and god knows how many terrible and expensive software packages. This is because the people doing the buying are usually not the ones doing the using.

It’s like cars and people:

Guy goes into Chevy dealer. Guy looks at Vette.
Guy wants Vette.
Guy test drives Vette.

Wife comes back to dealer with Guy.
Guy shows wife Vette.
Wife not interested.
Wife wants Suburban.
Guy and Wife drive off with new Suburban.
Vette lures another guy into dealership.

People always want the sizzle, as they say in sales, but when they are really hungry it’s still the meat and potatoes that they go for.

Did you understand all of the above? I’m glad, because I sure didn’t…
Really, now what does this have to do with complexity?
I’ll tell you: people want sizzle.
They want excitement, to be ‘with it’, ahead of the crowd, in the groove, ahead of the curve, and so forth and so on.
In our technological wonder age we have convinced ourselves and everyone else that to be with it means that you must be technologically savvy. That means a blatant display of technological mastery. Which leads to complexity. Not that the complexity is necessary or even really wanted. It’s the perception, the sizzle…

By the way, Jeff, can we please have a spell checker? I’m getting tired of having to switch to Word to check my spelling and grammar, and it would be so nice if you could just put one in for us, please? Can we please have a spell checker? You know your site will really be with it if you have one, and you’ll probably set off a stampede so that every site will have to have a spell checker. Please, Jeff, a spell checker…
And maybe one of those “anticipate what you’re typing” thingies, you know, like in Excel where it types the number for you after the first couple of letters. Please…

If you ever start your own software development company, or are involved in a small company that works for other small companies (read: no 500-page Use Cases), you will quickly find that the only design spec you ever really get is “We don’t know what we want, but we will know it when we see it. I CAN tell you all of the things that we DON’T want, will that help?”

I think technically that counts as spyware. I have no objection to products that gather data strictly for the good of the product, but I would prefer full disclosure. I wonder how many people realize that MS Office collects data of any kind on their work habits.

I don’t use MS Office '03 (though I did grow up on '97) because it’s more money than I care to spend on an office suite, so I can’t know for sure, but I wonder if it is possible to turn this function off, or at least monitor and control what it sends back. Is there any way for the end user to know, for certain and precisely, what MS is monitoring? And how about other vendors that do this? Is it possible that functions like this present a security loophole? Hell, what about the bandwidth usage?

A lot of users set up proxies and firewalls to block this exact kind of activity on their home networks. Not all of this kind of activity is malicious, but there is no way for a firewall to know which is which except by explicit permissions. So unless your app is able to break through these firewalls and proxies (isn’t that illegal?), it’s not likely to be as effective as you might hope.

So, I double-checked, and there is full disclosure; it is fully voluntary. This is good; MS knows where its priorities lie. I’m just wondering how many people are willing to submit to it, though. With a web app you have no choice (cookies), but here it could be a problem.

I imagine that the willingness of the user to submit this kind of info might be contingent on the kind of work he does. Which tools and features he uses most will be contingent on this as well. So the data gathered can still be significantly flawed, leading the company to tailor their product to only a segment of their market share and perhaps ignore the rest. I suspect the best way to gather data would be to simply use the product oneself, maybe with beta testing as well.

Hi Jeff,

Love the blog and loved this article in particular. I was writing a similar article, but you scooped me. I’ve published mine at the following link and have given you props, just to be on the safe side:

http://softarc.blogspot.com/2007/06/capability-vs-usability-what-your-user.html

Keep up the great work,

-Frank

good one!

“We know how much time you spend in the Calendar” - I hardly use the calendar, not because it isn’t useful, but because it isn’t comfortable to use…

No offense meant, but I am surprised at how simplistic this article is. Of course you can’t use “wish lists” to design new products. But observing users is extremely narrow at best. Users won’t say what they want, that’s granted, but there is so much more to it.
“what users say they will do, and what they actually do, are often two very different things” … doh. What about what they genuinely THINK they would do (and how wrong that also is)? What about what they would BUY, even fully aware of how they would underuse it (or overuse it, in some cases)? What about what they would WANT to do if they knew it was possible? In turn, what portion of that would correspond to genuine use when implemented? How many sales did you lose by NOT having a useless feature? Or even a useful one, for that matter! How many sales did you lose or gain by choosing to observe customers instead of listening to your engineers? How is the decision to buy related to usability (or ease of use, feature set…)?
What about the multiple other sides of design?
Do you really think that MS-DOS (a really perfect example of a wholly encompassing interface to a major device - and despite that crucial position, not quite an example of usability) was designed with usability in mind, or by a vision of the whole industry and what it would become?

Don’t ask. Don’t listen. Don’t observe.

Be smart. Reap the money.
Rinse and repeat.

“As designers, we define data points we’re interested in learning about and the software is instrumented to collect that data. All of the incoming data is then aggregated together on a huge server where people like me use it to help drive decisions.”

As software designers, I have often thought, we don’t mine our own application internals and data to learn more about how our systems are used. Most applications are simply information graveyards. Users input lots of great info we could use to improve the product, and we simply throw it in the bit bucket.

I am a Java developer and created an open source product called JAMon that I use to track aggregate application statistics about performance, usage, and errors. And because a developer can supply any String to JAMon indicating the concept he wishes to track, JAMon can really track just about anything.
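
For anyone who hasn’t seen it, JAMon usage is essentially a start/stop pair around whatever you want to measure (the label below is just an invented example):

```java
import com.jamonapi.Monitor;
import com.jamonapi.MonitorFactory;

public class ReportPage {
    public void render() {
        // The label is any String; JAMon aggregates hits, average/min/max
        // times, and concurrency per distinct label.
        Monitor mon = MonitorFactory.start("reports.monthlySales.render");
        try {
            // ... the actual work being timed ...
        } finally {
            mon.stop();
        }
    }
}
```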

It is time we software developers started making our own shoes…

http://www.officepolitics.co.uk/cobblers_children.html

http://www.jamonapi.com