Musings on Business and Tech

Category — software architecture

Notifications, Unread Items and Information Overload

Last week I wrote about the strategies Quora.com employs to engage its users and keep them coming back to the site. A big component of their strategy is the idea of notifications: the email and on-screen alerts the application uses to let you know that your attention is needed. Their notifications are tactful and largely welcome.

Unfortunately, like many other tools in the software architect’s toolbox, notifications can quickly cause insane levels of information overload when they’re used without careful thought.

Take for instance the Facebook iPhone app. Every time I open it and navigate to the main menu screen, I have some notifications waiting for me (usually people commenting on one of my wall posts or something similar). I’m alerted to this fact by a little bar on the bottom of the screen highlighted in a different color. This much I’m okay with.

However, if I then choose to close the app at this point without explicitly viewing the notifications, the app icon now has a little red number superimposed on it, telling me how many notifications I didn’t check. If you’re anal like me, this is torture. I now have to go back into the app and view the notifications in order to get rid of that annoying little red number.

“Unread” counts in email and news readers like Google Reader are another good example. Again, because of my mild OCD, I never let my inbox contain any unread messages. I even click on messages I know to be spam just so that they don’t keep notifying me of their unread status. Same goes for Google Reader. If I’m too busy to read everything and I have to skip some articles, I still have to mark them as read so I don’t have to see that notification anymore. I’ve often thought that these applications should archive (or mark as read) any unread messages automatically after a certain amount of time goes by. If I haven’t read an email in a few days, I’m probably never going to read it.
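That auto-archiving idea would only take a few lines to implement. A minimal sketch, assuming each message records when it arrived (the field names and the dict-based message representation here are hypothetical stand-ins for whatever the mail client actually stores):

```python
from datetime import datetime, timedelta

def auto_mark_read(messages, max_age_days=3, now=None):
    """Mark any message still unread after max_age_days as read.

    `messages` is a list of dicts with 'received' (datetime) and
    'unread' (bool) keys -- a stand-in for the client's real store.
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    for msg in messages:
        if msg["unread"] and msg["received"] < cutoff:
            msg["unread"] = False  # quietly retire the stale badge
    return messages
```

Run nightly, something like this would keep the unread count honest without any clicking on spam.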

All of this information desperately begging for our attention leads to apathy at best and resentment at worst. It’s like the boy who cried wolf. Eventually we’re just going to tune it out.

I think the trick here is to think like the user before implementing things like this. Do I really want to receive more than one or two emails per day from a given application? Should notifications be persistent, or should they fade away over time? Should they be mandatory, requiring the user to take a certain action so that they go away? Or should they merely be indicative of an action that is optional? Should the notifications be opt-in or opt-out?
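Those design questions could even be made explicit in the code as a notification policy that every alert passes through before it fires. A sketch, with all names and defaults hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Persistence(Enum):
    PERSISTENT = auto()  # stays until the user acts (the little red number)
    FADING = auto()      # disappears on its own after a while

@dataclass
class NotificationPolicy:
    max_emails_per_day: int = 1           # don't flood the inbox
    persistence: Persistence = Persistence.FADING
    requires_action: bool = False          # must the user dismiss it?
    opt_in: bool = True                    # off by default unless requested

def should_email(policy, emails_sent_today):
    """Suppress email once the daily budget is spent."""
    return emails_sent_today < policy.max_emails_per_day
```

Writing the policy down like this forces the team to answer each question deliberately instead of inheriting the framework’s defaults.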

These are crucial decisions to make when creating software, decisions that could lead either to delight or disgust.


July 1, 2010

How to Avoid Structural Rot in Software

Interesting post yesterday over at DZone about why applications rot, and how to avoid it. The author, Kirk Knoernschild, argues that the best way to avoid rotting design is to reduce dependencies in the architecture. That’s kind of a circular argument: what’s the best way to avoid a system where a change in one place affects lots of other things? Why, reduce dependencies of course!

But since dependencies are impossible to avoid completely, this “solution” is incomplete at best. What we also need is to constantly remind ourselves to leave things in better shape than we found them.
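Knoernschild’s dependency-reduction advice can at least be sketched in miniature (class names hypothetical): have the consumer depend on a narrow interface rather than a concrete class, so a change to one side doesn’t ripple into the other.

```python
class MySQLStore:
    """Concrete data source. In a rotting design, ReportGenerator
    would import and instantiate this class directly."""
    def fetch_orders(self):
        return [("widget", 3)]

class ReportGenerator:
    """Depends only on the narrow contract 'has fetch_orders()',
    so the store can be swapped or changed without touching this code."""
    def __init__(self, store):
        self.store = store

    def summarize(self):
        return {name: qty for name, qty in self.store.fetch_orders()}
```

The dependency still exists, it’s just been shrunk to one method, which is about the best you can do; the rest is the leave-it-better-than-you-found-it discipline.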

It’s like fixing your house. Let’s say you want to redo your bathroom, but in the process of ripping out all of the old fixtures you discover that the joists in the floor were never built right. Do you just ignore the problem and put in all the new stuff and hope for the best? Or do you suck it up and absorb the cost and time impacts of replacing the joists? Well, if you want your house to last another 30 years, you do the latter. If you don’t give a shit, well then you need to get some integrity.

This isn’t easy. What is easy is to get caught up in deadlines and budgets and to cut corners and make exceptions. But short term cost and time cutting will almost always lead to long term headaches. If you make a point to always leave things just a tiny bit better than you found them, you will continually improve. If you don’t, then things will simply rot.

Read That Rotting Design | Javalobby.

December 23, 2009

Warning: Your Project is a Werewolf

Via @stevereads, a great summary of a panel at the 2007 OOPSLA conference about what we can and can’t do to improve the software development process, what has changed over the last 20 years, and what hasn’t. The panel summary includes choice insights from Fred “Mythical Man Month” Brooks, and features Martin “Extreme Programming” Fowler pretending to be a werewolf.

There were two great quotes from the panel, one serious, one hilarious. Let’s get the hilarious one out of the way. It was from the werewolf:

Fortunately, it is hard work to manage a team and to focus on people’s interactions. It is much easier to fiddle your thumbs and play with Microsoft project. Every time somebody creates a PERT chart or a Gantt chart, I get to eat an extra kitten.

And now the serious one, from Linda Northrop, Director of the Research, Technology, and System Solutions Program at Carnegie Mellon’s Software Engineering Institute:

To wrestle future werewolves we still need great designers, and I think we still have far too few, and we still need to cultivate an atmosphere of hard work but also an inter-disciplinary perspective that takes us uncomfortably out of our coding world. Our world of what, if I could just borrow a phrase from this morning’s keynote, “the technology push”. I think the reason we have the technology push is because we don’t do the hard work to understand the needs of who we are trying to address.

InfoQ: No Silver Bullet Reloaded Retrospective OOPSLA Panel Summary.

December 3, 2009

The Primacy of Data

When we create software we tend to think about features first and data second. We want to know what the software does and what it looks like, but for some reason we don’t pay enough attention up front to the information underpinning the application.

As the sheer volume of information available to us expands exponentially, this tack will be increasingly backwards. Not that features and interface won’t matter at all, but we will have to reverse our approach. Instead of listing a set of functional requirements and building them around the data, we will need to look at the information first, and then decide what we want to do with it. Those firms that can find innovative ways to unveil the emergent properties of their data will have significant competitive advantage over those that can’t.

December 1, 2009

Software Development as a Series of Debits and Credits

In his chapter in Beautiful Architecture, Pete Goodliffe talks about “managing technical debt” as a goal of the software development process. What he’s referring to are the “loans” you take from your system when you make last-minute, quick-and-dirty fixes in order to reduce risk or deliver on time. Often, “last-minute” and “quick-and-dirty” are bad words in the development process, but in many cases they’re the only choice you have, as long as you recognize that every time you cut a corner you are creating a debt for yourself in the form of future architectural revisions.

This is a very cool way of looking at these kinds of “bad decisions.” It empowers you to make such decisions under the right circumstances. It gives you a framework to understand how those decisions can work to leverage your scarcities in the present (time, money) by amortizing those costs over time. It also made me think of the flip-side of this metaphor: that carefully-planned software architecture is actually a credit on your account. Every time you take the time (and money) to build something the right way, it’s as if you made a deposit that you can draw on later. It’s also an investment, as time spent building things the right way now will pay dividends down the road that far surpass the initial effort.

November 30, 2009

Developer-centered Design

It is common practice to design software with the end user in mind. We call this user-centered design. Its primary concern is to create a system for end users that contains a kind of conceptual continuity, mitigating the pain of the unfamiliar. While it is no doubt crucial to consider the needs of the end user when designing a system, we also need to think about those of another type of user: the developer.

After all, if a system doesn’t contain some form of conceptual continuity in the way that it’s built, developers will find it increasingly difficult to grow and maintain it. And because it will be difficult, they won’t enjoy the process, resorting to quick and dirty solutions to avoid the fatigue that sets in when grappling with conceptual cacophony.

I would venture to guess that our brains release endorphins when they sense the patterns inherent in a conceptually continuous system. We like when we try new things and they work because they’re similar to things we’ve done before. Once you master sending and receiving emails, it’s not difficult to grasp the concept of replying and forwarding. Developing a system is (or should be) the same way; it should be clear how to create something new based on what you already have.

Developers and end users alike shouldn’t have to “Read The Fucking Manual.” You don’t learn how to use software by reading about it; you learn by using it. And if it’s a pain in the ass to use it, whether externally as an end user, or internally as a developer (or, worst case, both), people will just give up.

November 25, 2009

Executive Decisions

We were in DC a couple of weeks ago visiting with some friends in the area, one of whom is a landscape architect. I’m always curious to learn more about architecture, though I know next to nothing about it. In particular I’m interested in the overlaps between the architecture of buildings and landscapes and the architecture of software systems.

I asked my friend how much of his designs cover all the little corner cases that may come up in the building process versus how much needs to be tweaked once the builders are working. He told me that a lot of the design is guesswork and the builders (developers in my world) need to make a lot of small decisions about the implementation once they’re in the trenches. I asked how often they come back to the architect to confirm whether their decisions are correct. “Not very,” he said. It’s all about the architects trusting the builders to make these kinds of choices on their own, and the builders trusting themselves to do it right.

This is very similar to the software world. You can’t possibly think of everything in the design phase, so the developers end up making lots of small executive decisions about implementation along the way. And there just isn’t time for them to come back to the architect (or the client) for every little thing. That said, if there are major gaps or flaws in the design, the developers do need to check in with the designers (and the designers with the client) before they proceed. Knowing where the line is between small executive decisions and big ones is what separates the good builders from the bad.

November 19, 2009

Generalization vs. Specialization

One of the more difficult tasks in software design is striking a balance between creating a general solution and a more specific one. Should we build something that can handle all sorts of hypothetical future requirements or one that solves a specific problem, but may not work for anything else down the road? Should we take longer and spend more up front but reap dividends down the road? Or should we go quick and dirty now and “cross that bridge when we get to it?”

For example, suppose you were building a web app that facilitated introductions over email. Jane wants to introduce John and Tom. Jane visits a website and enters John and Tom’s emails into a form. The application then sends emails to John and Tom saying that Jane would like to introduce them. John and Tom can click on a link in the email to confirm they would like to be introduced, and the application sends the two of them one another’s email addresses.

Thinking about the data model for this application, you reason that there should be a table that stores each introduction, with fields for Jane, John and Tom’s email addresses. But what if, down the road, you want the application to support introductions for arbitrary numbers of people, rather than just two? Or what if you want to handle not just email addresses, but phone numbers, mailing addresses, etc.? Should you build the data model now to support these potential features, or should you wait to implement them until you actually need them?

On the positive side, building a robust data model now would enable you to quickly and easily build new features later; on the negative side, if, for lack of demand, you never end up implementing those features, then you’ve wasted your time. On the other hand, if you only build the things you need right now, over time you will end up with a tangle of spaghetti code.
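To make the trade-off concrete, here is one way the two data models might look, using plain dictionaries in place of real tables (all structures and field names hypothetical):

```python
# Specific model: one row per introduction, hard-wired to exactly three
# people identified by email. Quick to build, hard to extend.
specific_intro = {
    "introducer_email": "jane@example.com",
    "person_a_email": "john@example.com",
    "person_b_email": "tom@example.com",
}

# Generalized model: an introduction row plus a separate participants
# table, each participant carrying a contact type. Handles N people and
# phone numbers or mailing addresses with no schema change.
generalized_intro = {
    "id": 1,
    "introducer": {"contact_type": "email", "value": "jane@example.com"},
}
participants = [
    {"intro_id": 1, "contact_type": "email", "value": "john@example.com"},
    {"intro_id": 1, "contact_type": "email", "value": "tom@example.com"},
]
```

The generalized version costs an extra table and a join today; whether that’s worth it is exactly the question the factors below try to answer.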

So what’s the answer? Well, like many other things in life, it depends. It is really a case by case question, and that’s what makes it difficult. There are a few factors to weigh in making this decision:

  • What’s the real likelihood of implementing this feature down the road? In the example above I would argue that introducing more than two people to one another at a time is a corner case and therefore not a likely future feature. On the other hand, supporting phone numbers and mailing addresses has a high likelihood.
  • What’s the cost of generalizing now? If the present cost of generalizing is just way too high (either in time or money), then you have no choice but to put it off for later.
  • Have we been here before? If you find yourself designing something that’s eerily similar to something you’ve done before, and you can foresee having to do it again, you should take the time to generalize now.

I find that many good software engineers fall into the trap of over-generalizing, because that is what they were taught to do in college, and so everything they do just ends up taking forever. On the flip-side, bad coders never plan for the future and just keep heaping crap on top of crap as they go. So keep in mind the guidelines above next time you have to make this trade-off. And also keep in mind that this trade-off is happening all the time in software design, so if you’re not thinking about it, you’re either wasting money or building crummy code.

November 17, 2009