Musings on Business and Tech

Just Say No to Feature Creep: Xcode Edition

One of the hardest things for any software designer to do is to decide not to implement a feature. Many software projects have been delayed or even derailed by feature creep, the tendency to widen the scope of a project during development. But in many cases, features that seem like “must-haves” during development can be deferred to a later release, or cut completely.

Perhaps the paradigmatic example of this is the original iPhone OS’s lack of cut, copy and paste. How could Apple have omitted such vital features? It didn’t seem to hurt sales of the iPhone though.

Today I just ran into another example, also from Apple. In Xcode, you can switch from a header file to its corresponding implementation file (and back) using the keyboard shortcut Command-Control-Arrow (any arrow). This is a really nice way of navigating back and forth while you’re creating new instance variables and methods for your classes. However, when you navigate in this way, the project browser at the left doesn’t update its highlight to indicate that you’re viewing a different file. Is this a bug? Probably not. It’s probably just the designers of Xcode deciding to rein in feature creep so that they can actually ship the product.

Xcode and avoiding feature creep

It’s so damn tempting to make sure every little bug is fixed and every little corner case is accounted for before you release your software. But, as they say, perfect is the enemy of the good. It’s crucial to know when something is good enough so you can ship it as soon as possible. Apple finally introduced cut, copy, and paste in the third version of the iPhone’s operating system. By then they had already sold millions of phones to customers who decided they could live without that crucial feature.

 

March 14, 2012   No Comments

Apple vs. Switzerland

Yesterday Apple’s market cap topped $500 billion. Staggering. Only 19 countries in the world have a bigger GDP than Apple’s market cap, possibly soon to be only 18:

Apple vs. Switzerland

March 1, 2012   No Comments

No, Graphic Designers Aren’t Ruining The Web

I woke up today to a provocative article in The Guardian by John Naughton about how graphic designers are ruining the web. Naughton’s main argument seems to be that graphic design adds unnecessary bulk to websites, wasting bandwidth. He is absolutely right that page sizes have increased over the last two decades of the web’s existence. He is also right that this is a problem.

However, he describes the problem as a “waste of bandwidth.” Last I checked, bandwidth isn’t a scarce resource that gets used up (unless maybe you extrapolate bandwidth to barrels of oil). The bigger problem is that more elements on a page (and bigger individual elements) slow down page load times and frustrate the user. If Naughton is saying that people who make websites should work to reduce the number and size of the elements on their pages, I completely agree.

But it does not then follow that websites also need to be ugly (he uses Norvig.com as an example of an underdesigned site that is compelling for its content, if not its look and feel). Highly designed websites need not be bulky. Just because loading the BBC News homepage triggers 165 separate requests doesn’t mean all designed sites do. NPR.org is a lean and mean website, requiring roughly 50% fewer requests than BBC News. Yet I would say it offers a more user-friendly way to access information than Norvig’s site.

And we could improve things even more than that. We can combine and minify JavaScript and CSS files (a rough sketch of the idea follows below). We can reduce the number and size of the images on each page. Many requests on big sites like these go to third-party tracking pixels and scripts. How about we agree to pay for the services and content we use on the web so we don’t have to deal with all this bullshit marketing crap? Graphic design is not the cause of all this bulk. Ever-increasing user bandwidth and marketers are more to blame.
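To make the combining-and-minifying point concrete, here is a minimal sketch in Python. The file names are hypothetical, and a real build would use a proper minifier rather than these naive regexes:

```python
# Minimal sketch: bundle several CSS files into one and strip comments and
# whitespace, turning many small requests into a single, smaller one.
# File names are hypothetical placeholders.
import re
from pathlib import Path

def minify_css(css: str) -> str:
    css = re.sub(r"/\*.*?\*/", "", css, flags=re.DOTALL)  # drop /* comments */
    css = re.sub(r"\s+", " ", css)                         # collapse whitespace
    css = re.sub(r"\s*([{};:,])\s*", r"\1", css)           # tighten around punctuation
    return css.strip()

def bundle(paths: list[str], out_path: str) -> None:
    combined = "\n".join(Path(p).read_text() for p in paths)
    Path(out_path).write_text(minify_css(combined))

if __name__ == "__main__":
    # One request for bundle.min.css instead of three separate stylesheets.
    bundle(["reset.css", "layout.css", "theme.css"], "bundle.min.css")
```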

I’ll agree that some underdesigned sites are excellent because they are underdesigned: Craigslist.org and (the original) Google.com. But if Apple has taught us anything over the past decade, it is that things can be designed without being complicated and bulky. And that is the direction I’d like to see the web going in. That way we get to have our cake and eat it too.


February 19, 2012   8 Comments

The 5 Worst Practices of the Mobile Web

My friend Michael McWatters tweeted his frustration today that there is no way to change your Twitter password on their mobile site. I’ve butted up against this issue in the past, and the fact that you can’t even switch from the mobile site to the full site is immensely annoying (in fact, the mobile site doesn’t even have a footer!).

With smartphone penetration growing ever higher, it’s increasingly important for companies not just to build mobile sites, but to build them well. Mobile sites can no longer play second fiddle to their desktop brethren. Over the past few months I’ve become increasingly sensitive to, and bugged by, the degree to which so many mobile sites are so badly implemented. With that in mind, here are my 5 “worst” practices of the mobile web.

  1. Don’t give users the choice of using the full site – not letting users choose to use the full site on their mobile device is presumptuous at best, and crippling at worst. Just because the screen is small doesn’t mean you don’t want to be able to access all of a site’s features in a pinch. On the iPhone anyway, browsing a full website is often very tolerable and should at least be an option for users. This is related to #2, which is…
  2. Don’t cripple your mobile site – while it may be true that on a dumb phone you likely do not need or want to access all of a site’s features on the go, on a smartphone you often do. A mobile site no longer needs to be a list of the 10 most visited pages on a site. Let’s start building mobile sites that allow access to some more advanced features like changing your password.
  3. Show an interstitial ad for your mobile app – have you ever clicked on a link on your phone only to be brought to an interstitial ad for a site’s mobile app, instead of the article you were trying to read? And of those times, how many times have you gone immediately to download the app instead of just closing out the ad and trying to read the article you were interested in?
  4. Don’t redirect from your mobile domain to the full site on a desktop browser – many sites with a mobile domain use browser detection to redirect you to it. But many of those do not do the reverse redirect (i.e., visiting the mobile site in a desktop browser doesn’t redirect back to the full version). Being forced to view a mobile site in a desktop browser is torture.
  5. Redirect to your mobile domain, but not the specific page – all this redirecting has its place, but it’s so easy to get wrong (a sketch of one way to get it right follows this list). On many occasions I have clicked on a link on my phone, been redirected to the mobile domain, and, instead of landing on the article I was trying to read, been dumped on the homepage of the mobile site. So frustrating!
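For the redirect problems in particular (worst practices 4 and 5), here is a minimal sketch of redirect logic that keeps the deep link and also handles the reverse direction. The domain names and the user-agent check are illustrative assumptions, not any particular site’s implementation:

```python
# Sketch: redirect mobile visitors to the mobile domain *with the same path and
# query*, and send desktop visitors on the mobile domain back to the full site.
# Hosts and user-agent hints are invented for illustration.
from urllib.parse import urlsplit, urlunsplit

FULL_HOST = "www.example.com"
MOBILE_HOST = "m.example.com"
MOBILE_HINTS = ("iphone", "android", "mobile")

def is_mobile(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(hint in ua for hint in MOBILE_HINTS)

def redirect_target(url: str, user_agent: str):
    """Return the URL to redirect to, or None if no redirect is needed."""
    scheme, host, path, query, _ = urlsplit(url)
    if is_mobile(user_agent) and host == FULL_HOST:
        return urlunsplit((scheme, MOBILE_HOST, path, query, ""))  # keep the deep link
    if not is_mobile(user_agent) and host == MOBILE_HOST:
        return urlunsplit((scheme, FULL_HOST, path, query, ""))    # reverse redirect
    return None  # already on the right version of the site

# e.g. redirect_target("http://www.example.com/articles/42?src=tw", "Mozilla ... iPhone ...")
# -> "http://m.example.com/articles/42?src=tw"
```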

The mobile web is certainly in its infancy, but that’s no excuse for giving users such broken experiences. It’s 2011, and it’s imperative that mobile sites be just as beautiful, simple, and elegant as the devices used to navigate them. If you have to choose between offering a mobile site that suffers from any of the worst practices listed above and having no mobile site at all, choose the latter.

 

August 29, 2011   No Comments

This Piece of Technical Writing Has Been Written By Me

In my role as a business analyst at a software development shop I see a lot of technical writing, much of it terrible. For some reason, people whose job it is to be precise and logical often fail to do so when the language of expression is English, rather than Java. While the problems in technical writing are varied, the offense I most often see is overuse of the passive voice.

For those who don’t remember their junior high school grammar, passive voice is a grammatical construct in which the object of a sentence is repositioned as its subject. “Tom throws the ball” is active voice, while “The ball is thrown by Tom” is passive. The use of passive voice in itself is not grammatically incorrect, but it often weakens the clarity of the writing by obscuring who or what is doing the action in the sentence.

Technical writing is a veritable breeding ground for passive voice proliferation, in many cases because the actors in technical writing are not tangible. The actors are software code, or systems, or networks. My phone today popped up an alert that said, “The server cannot be reached.” Who exactly is the one not reaching the server? Is it the phone? Is it the app I was running? Is it me?

But just as a writer would avoid passive voice in “normal” English prose, so too should a technical writer avoid it in his work. Phrasing technical ideas in the passive voice dampens the agency of the thing doing the action, making it seem unfamiliar and disembodied. Technology does things. To render technology in the passive voice is to distort its power to create change.

This is especially evident when technical writing refers to error conditions, as in the case of the alert above. It’s almost as if the authors of the software were deflecting blame away from themselves with the message, “The server cannot be reached.” They could just as easily have said, “It’s not our fault that you can’t access this page. Talk to the dudes who run the server.” (People in IT love to blame the other guy, but that’s a story for a different post.)

It’s never that difficult to clean up language like this in one’s technical writing, but it often requires ascribing some degree of agency to the technology. Instead of “The server cannot be reached,” one could write, “The application failed to reach the server,” or, “The application failed to connect to the server.” If English had a better indefinite subject pronoun, we could even write something like, “One cannot reach the server at this time.”
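As a small illustration (my own example, not taken from any particular product), here is how an error handler might phrase the same failure in the active voice, naming the application as the agent:

```python
# Sketch: an error message in the active voice names the thing doing (or
# failing to do) the action, instead of leaving an agentless passive.
import socket

def check_server(host: str, port: int = 443, timeout: float = 3.0) -> str:
    try:
        socket.create_connection((host, port), timeout=timeout).close()
        return f"Connected to {host}."
    except OSError:
        # Passive, agentless:   "The server cannot be reached."
        # Active, with an agent: the application admits it is the one that failed.
        return f"This application could not reach {host}. Check your connection and try again."

print(check_server("example.com"))
```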

There are any number of solutions to the problem of passive voice in technical writing. The main thing is to be aware of the easy pitfall, and to think about technology more as an agent of change than as some hidden force behind the things we observe.

June 13, 2011   1 Comment

TV Zero

My family and I haven’t watched “TV” in weeks. Granted, we don’t have cable (we use rabbit ears and a digital-to-analog converter box), but that’s not really the reason we haven’t been watching. The real reason is that Netflix instant streaming has changed our lives.

With the sheer volume of quality content that Netflix has (as well as other online video sites like Hulu), we are now at the point where we don’t really need to watch actual television. We are getting close to a point I like to call “TV Zero.”

By “TV Zero,” I don’t mean turning off all your screens and moving to Montana. I simply mean disconnecting from television as we know it (scheduled programs grouped into broadcast networks). I truly believe that, no matter how much the cable companies and networks drag their feet over the next few years, it’s just a matter of time before all programming formerly available on cable or over-the-air broadcast will be available on the internet. The experience is so much better.

For one thing, video over the internet is truly demand-based. I can watch any episode I want, at any time I want. For another thing, finding content is far easier, and has far more potential, than the current model that cable tv uses. Netflix can recommend shows I may never have heard of, based on what it already knows about my consumption habits. The array of content available is also more vast–services like Netflix can offer back catalogs of content providers with much lower incremental cost than, say, a cable company. In fact if you think about it, it’s kind of shocking that after 15 years of the “commercial internet” we’re still only in the early stages of this.

And then there’s all the recent buzz about Apple making a “smart TV.” If the rumors are true (and I believe they are, for the good reasons outlined here), our culture’s move toward “TV Zero” could accelerate tremendously. The potential for disruption and innovation in this space is huge, and in my opinion inevitable, and there’s no company in a better position to lead this change than Apple. But if Apple won’t do it, then someone else will. (Amazon? Google?)

One thing is certain, though: the cable companies will not go down without a lot of kicking and screaming. Unless someone in their ranks realizes the inevitability of this change, and figures out a way to profit madly from it.


April 18, 2011   2 Comments

Minimizing Agony, Maximizing Pageviews

On Wednesday at the Launch Conference, travel search engine Hipmunk presented a new mobile version of their web app. But that’s not what I want to talk about. I want to talk about Hipmunk’s general approach to solving the problem of airfare search, and how it might be applied to other problems.

The genius of Hipmunk is in their “agony” algorithm, grounded in the key insight that when people search for airfares, price and departure time are rarely the only considerations. What people really want to know is how agonizing the trip will be, measured as a combination of price, duration, and number of layovers. So Hipmunk sorts your search results by this “agony” score, with the least agonizing trips at the top. Simple. Brilliant. There’s so much agony in the world; what else could this model be applied to?
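To make the model concrete before looking at other uses, here is a minimal sketch of what an agony score and sort might look like. The Flight fields and the weights are my own invented illustration, not Hipmunk’s actual formula:

```python
# Sketch of an "agony"-style sort; the weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Flight:
    price: float      # dollars
    duration: float   # hours, door to door
    layovers: int

def agony(f: Flight, w_price=1.0, w_duration=40.0, w_layover=75.0) -> float:
    # Each factor is converted to a rough dollar-equivalent and summed;
    # a lower total means a less agonizing trip.
    return w_price * f.price + w_duration * f.duration + w_layover * f.layovers

flights = [
    Flight(price=220, duration=9.5, layovers=2),
    Flight(price=310, duration=5.0, layovers=0),
    Flight(price=260, duration=7.0, layovers=1),
]

# Least agonizing trips first, as on Hipmunk's results page.
for f in sorted(flights, key=agony):
    print(f, round(agony(f)))
```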

The first thing that jumps to mind is turn-by-turn directions. Most navigation apps provide routes that optimize for distance or time, and in some cases for real-time traffic. But there are a lot of other factors that can contribute to one’s agony while driving. For instance, given the choice, I’d much rather drive a scenic route than an interstate, but likely only if the scenic route isn’t orders of magnitude more time-consuming. Or maybe I’d like to drive a route with better food options than Shoney’s and Roy Rogers. Transit directions could also benefit from this model. I’d much rather take a trip involving a transfer if the two trains were less crowded than the single one, provided the total duration wasn’t significantly longer.

Another great application of the “agony” model would be a site that helps you decide whether to buy something online or at a nearby store. The algorithm could factor in a combination of the item cost, shipping cost, shipping time, and return policy of the online option, and compare it to the item cost and travel distance for a local store that carries the item, as well as the real-time availability of that item in the store’s inventory (Milo.com is working on this last problem).

Sorting by “agony” factor is a powerful idea, and one that is quickly letting Hipmunk soar to the top of the travel search business. What other problems could you apply this model to?


February 25, 2011   No Comments

Mapping Transit Delays With Ushahidi

MTA Delays Crowdmap

I woke up at 5 this morning to the news that the two subway lines in my neighborhood were still not running, more than 36 hours after the “Boxing Day Blizzard” had begun in New York City. The question then was, well, what is running?

The awful MTA site wasn’t much help, especially with regard to the buses. It offered cryptic and non-specific messages like:

Due to ongoing snow related conditions, all MTA bus express services are running with system wide delays. There is no limited stop bus service in all boroughs.

There was no indication of how any one specific bus route was faring.

My neighborhood’s Yahoo group was abuzz with conversations about which trains and buses were and weren’t running, but the information was unorganized and freeform, because it was happening over email.

And while the MTA probably had a good internal grasp of which lines were having issues, they were not doing a good job of disseminating that information. Ideally their website should have been providing real-time geo-located incident reports so that any commuter could look at a map and quickly determine what the best route was to wherever they had to go. Even more ideal would have been, as my friend Michael McWatters suggested, a trip planner that could re-route you away from the suspended lines and to the freely-moving ones. But, at the very least, even just a little bit more detail would’ve been nice.

So this morning I thought to myself, this is exactly what crowdsourcing is good at. And I remembered hearing about a mapping tool called Ushahidi, which was put to good use during the crisis that followed the Haitian earthquake back in January. Indeed crisis mapping is a very powerful idea, and Ushahidi is leading the charge with their open source application that’s free to download and deploy, as well as with Crowdmap, their hosted version of Ushahidi that is also, surprisingly, free.

In the span of about an hour, I put up a site using Crowdmap called mtadelays.crowdmap.com. I entered all the subway service changes from the MTA site and told a few people about it. It got a little bit of Twitter buzz, but only one person other than me submitted a report. I think I was a little late to the game (I should’ve set it up on Sunday), but, it turns out, the tool also has a few shortcomings specific to this particular use case.

First, the tool was built for incidents whose geography is best described as a point (a latitude/longitude coordinate pair). But transit delays are best described as lines: when service is disrupted, there is a starting coordinate and an ending coordinate for that event. Ushahidi had no good way of representing this, so I ended up just putting in two points for every incident. It was kind of a hack, and it was also misleading–the issue itself spanned an entire length of subway or bus line, not just the endpoints. I imagine that anyone who might’ve submitted a report would have run into the same issue, and I also suspect there are some incidents that are best described by a polygon rather than a point or a line.

Just as incidents are pinned to a point in geographical space rather than a line, they are also pinned to a point in time rather than a duration. Transit delays have a finite duration (even if the duration isn’t known up front). I would love it if you could set incidents to expire after a certain amount of time (24 hours maybe?) rather than requiring an admin to go back into the system and either edit or delete each one (something like the sketch below). The overhead can be prohibitive otherwise.
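Purely as an illustration of what I mean (this is not a feature Ushahidi offers), an incident record could carry two coordinates instead of one, plus an expiry:

```python
# Illustrative sketch: an incident described by a line segment (two coordinates)
# instead of a single point, with an expiry so stale reports drop off the map.
# The coordinates, title, and 24-hour default are invented examples.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class LineIncident:
    title: str
    start: tuple      # (lat, lon) where the disruption begins
    end: tuple        # (lat, lon) where it ends
    reported_at: datetime
    ttl: timedelta = timedelta(hours=24)  # assumed default expiry

    def is_active(self, now: datetime) -> bool:
        return now < self.reported_at + self.ttl

incidents = [
    LineIncident("No F service", (40.650, -73.975), (40.608, -73.981),
                 reported_at=datetime(2010, 12, 27, 6, 0)),
]

now = datetime(2010, 12, 28, 9, 0)
active = [i for i in incidents if i.is_active(now)]
print(active)  # this report is older than 24 hours, so the list is empty
```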

Another issue is that it’s somewhat difficult to submit reports. You have to visit the website and submit a form. Actually that’s not really true–Ushahidi supports reporting via Twitter hashtags, SMS, or mobile apps (though there isn’t an iOS app yet). These are decent options, but you don’t really get the good geo-location data this way. It’s probably only a matter of time before the mobile options for Ushahidi reporting get really good, but for now it’s a bit clunky.

Despite these issues, though, it’s really interesting to see how far this kind of technology has come. It’s also interesting to think about how many different mature platforms Ushahidi is built on: there’s Linux, Apache, MySQL, PHP, the Google Maps API, the Twitter API, SMS, email, RSS, and probably many others. It’s pretty staggering when you think about it, and all I really had to do to set it up was press a “submit” button on a web page.

Even though the Haiti earthquake was a big moment in the spotlight for Ushahidi, I think we have yet to hear the last of them. They are building an amazing tool and I’m excited to see how it can evolve and continue to help communities deal with local crises and civic emergencies.


December 28, 2010   1 Comment

A Decade Later, Are We In Another Tech Bubble?

Lots of people are buzzing lately that we’re in another “dotcom” bubble, roughly ten years after the last one. In mid-November, noted New York venture capitalist Fred Wilson described some “storm clouds” ahead for the tech investing space. He described what he sees as some unsustainable “talent” and “valuation” bubbles. This was around the time of the TechCrunch story about the engineer that Google gave $3.5 million to stick around.

Not too long after that, Jason Calacanis of Mahalo fame wrote a brilliant edition of his email newsletter in which he outlined four tech bubbles he sees right now: an angel bubble (similar to Wilson’s valuation bubble), a talent bubble, an incubator bubble (new firms cropping up to try and copy the successes of YCombinator and TechStars), and a stock market bubble.

And the frothy news just keeps on coming: Groupon this week allegedly turned down a $6 billion acquisition offer from Google (yes, that number has nine zeros and three commas in it [1]). Oh, and also, the SecondMarket valuation of Facebook is about $41 billion. That makes it #3 in the web space, after Amazon and Google.

And, finally, there was a hilarious and depressing tweet going around yesterday from @ramparte.

But for me the proof was in two recent encounters with people decidedly not in the tech industry: my accountant and my banker. Each of them, upon learning what I do for a living, started talking to me about their tech business ideas. One was intriguing; the other was, shall we say, vague. But everywhere I turn these days I feel like someone’s trying to pitch me their idea for a social network, a mobile application, or whatever. And who am I? I’m a nobody. Can you imagine how many pitches people like Fred Wilson and Jason Calacanis get? It must be absurd. And in any case, what most of these folks don’t realize is that the idea is about 5% of a successful business. The remaining 95% is laser focus and nimble execution.

I feel lucky to be in technology right now–the economy is so crappy for almost everyone else. And that’s got to be one of the driving factors of this bubble right now. It’s one of the only healthy industries out there, and it’s attracting people who are disenchanted with whatever sick industry they happen to be in. Other driving factors of course are the recent explosive growth in mobile computing, the maturation of the web development space (frameworks like Ruby on Rails and Django that make web app development almost frictionless), and the rise of APIs and web services that allow vastly different sites to integrate their offerings.

It’s as if all the fishermen in the world have descended on one supremely awesome spot. A lot of people will catch a fish or two, some will catch enough that they’ll never have to fish again, but most won’t catch a thing.


[1] If anyone ever offers me $6 billion for anything, please remind me not to turn them down.


December 5, 2010   No Comments

The Next Phase is Not Web 3.0

O’Reilly Media’s Web 2.0 Summit, which took place over the last few days in San Francisco, got me thinking: why is the web still only at version 2.0?

Tim O’Reilly himself coined the phrase Web 2.0 back in 2004 for his first conference of the same name. It was defined by an evolution in front-end technologies like AJAX and bubble letters, back-end technologies like web services and RSS feeds, and business models like crowdsourcing and software as a service.

So given that we’re six years into Web 2.0, when will we get to Web 3.0? The answer is never. No one will ever start calling it Web 3.0. For one thing, it’s not catchy. Web 2.0 has a certain ring to it that Web 3.0 doesn’t. Also, I think it will be difficult for people to come to a consensus on when the technology has evolved enough to merit a new version number. Web 2.0 was coined by a single person; Web 3.0 would have to emerge more organically. We’re much more likely to describe future “versions” of the web with descriptive phrases rather than numbers.

Tim Berners-Lee has always been against this nomenclature anyway. His alternative to “Web 2.0” was the “Read/Write Web,” because of the way in which users became empowered to contribute en masse to the data on the internet. And in 2006, when asked what Web 3.0 would be, he said that a component of it would be “The Semantic Web,” or “a web of data that can be processed directly and indirectly by machines.” In other words, a web in which the machines can glean meaning from the data, in addition to simply manipulating it.

But I would argue that we are already at the next evolution of the web, and yet it’s not about semantics. It’s about context. This new phase of the web has largely been catalyzed by two breakthroughs: advances in the power and reach of mobile computing, as well as what Mark Zuckerberg calls “the social graph.” Both of these lend not meaning but context to data, and that is a very powerful thing.

Mobile devices can contextualize data around locations, photos, video, and audio (among other things). And of course the social graph connects data to people. The “Internet of Things,” as it continues to grow, will increasingly connect data to objects (shall we call it the “object graph?”). Although context is a step in the direction of semantics, we are still a ways away from getting machines to the point where they can interpret meaning from this data.

Indeed the “web” isn’t even about machines anymore. What was once a network of machines connected by wires is now a network of people, places and things connected by context. There is a new network growing atop the old.

Perhaps the semantic web will come in version 4.0 (although we still won’t call it that). But I think the best characterization of the most recent evolution of the web is the “Contextual Web” (I am not the first to call it such). Twitter, Facebook, Foursquare, the iPhone, Android, and many other prominent technologies can fall under this term, and I think it best describes the current proliferation of mobile and social technology that is spawning so many new and interesting businesses.


November 18, 2010   2 Comments