On Wednesday at the Launch Conference, travel search engine Hipmunk presented a new mobile version of their web app. But that’s not what I want to talk about. I want to talk about Hipmunk’s general approach to solving the problem of airfare search, and how it might be applied to other problems.
The genius of Hipmunk is in their “agony” algorithm, grounded in the key insight that when people search for airfares, price and departure time are rarely the only considerations. What people really want to know is how agonizing the trip will be, measured as a combination of price, duration, and number of layovers. So Hipmunk sorts your search results by this “agony” score, least agonizing first, of course. Simple. Brilliant. There’s so much agony in the world; what else could this model be applied to?
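Hipmunk has never published its actual formula, but the idea is easy to sketch. Here’s a minimal version assuming a simple weighted sum; the `Flight` type and the weights are entirely made up for illustration:

```python
from dataclasses import dataclass

@dataclass
class Flight:
    price: float      # dollars
    duration: float   # hours, door to door
    layovers: int

def agony(f: Flight, w_price=1.0, w_duration=20.0, w_layovers=50.0) -> float:
    # Weighted sum of the three pain factors. These weights are
    # hypothetical -- Hipmunk's real scoring is proprietary.
    return w_price * f.price + w_duration * f.duration + w_layovers * f.layovers

flights = [
    Flight(price=250, duration=9.5, layovers=2),  # cheap but painful
    Flight(price=320, duration=5.0, layovers=0),  # nonstop
    Flight(price=280, duration=7.0, layovers=1),
]

# Least agonizing trips first.
results = sorted(flights, key=agony)
```

With these weights the nonstop wins despite being the most expensive, which is exactly the point: price alone is a bad proxy for how much a trip will hurt.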
The first thing that jumps to mind is turn-by-turn directions. Most navigation apps provide routes that optimize for distance or time, and in some cases for real-time traffic patterns. But there are a lot of other factors that can contribute to one’s agony while driving. For instance, given the choice, I’d much rather drive a scenic route than an interstate, but probably only if the scenic route isn’t orders of magnitude more time-consuming. Or maybe I’d like to drive a route with better food options than Shoney’s and Roy Rogers. Transit directions could also benefit from applying this model. I’d much rather take a trip that involved a transfer if the two subways were less crowded than the one, provided the trip duration wasn’t significantly longer.
Another great application of the “agony” model would be a site that helped you decide whether to buy something online or at a nearby store. The algorithm could factor in a combination of the item cost, shipping cost, shipping duration, and return policy of the online option, and compare it to the item cost and travel distance to a local store that carries the item, as well as the real-time availability of that item in the store’s inventory (Milo.com is working on this last problem).
Sorting by “agony” factor is a powerful idea, and one that is quickly letting Hipmunk soar to the top of the travel search business. What other problems could you apply this model to?
February 25, 2011
I woke up at 5 this morning to the news that the two subway lines in my neighborhood were still not running, more than 36 hours after the “Boxing Day Blizzard” had begun in New York City. The question then was, well, what is running?
Due to ongoing snow related conditions, all MTA bus express services are running with system wide delays. There is no limited stop bus service in all boroughs.
There was no indication of how any one specific bus route was faring.
My neighborhood’s Yahoo group was abuzz with conversations about which trains and buses were and weren’t running, but the information was unorganized and freeform, because it was happening over email.
And while the MTA probably had a good internal grasp of which lines were having issues, they were not doing a good job of disseminating that information. Ideally their website should have been providing real-time geo-located incident reports so that any commuter could look at a map and quickly determine what the best route was to wherever they had to go. Even more ideal would have been, as my friend Michael McWatters suggested, a trip planner that could re-route you away from the suspended lines and to the freely-moving ones. But, at the very least, even just a little bit more detail would’ve been nice.
So this morning I thought to myself, this is exactly what crowdsourcing is good at. And I remembered hearing about a mapping tool called Ushahidi, which was put to good use during the crisis that followed the Haitian earthquake back in January. Indeed crisis mapping is a very powerful idea, and Ushahidi is leading the charge with their open source application that’s free to download and deploy, as well as with Crowdmap, their hosted version of Ushahidi that is also, surprisingly, free.
In the span of about an hour, I put up a site using Crowdmap called mtadelays.crowdmap.com. I entered all the subway service changes from the MTA site, and told a few people about it. It got a little bit of Twitter buzz, but only one person submitted a report other than me. I think I was a little late to the game (I should’ve set it up on Sunday), but, it turns out, the tool also has a few shortcomings specific to this particular use case.
First, the tool was built for incidents whose geography is best described as a point (a latitude/longitude coordinate pair). But transit delays are best described as lines. When service is disrupted, there is a starting coordinate and ending coordinate for that event. Ushahidi had no good way of representing this, so I ended up just putting in two points for every incident. It was kind of a hack, and it also was misleading–the issue itself spanned an entire length of subway or bus line, not just the endpoints. I imagine that anyone who might’ve submitted a report would have run into the same issue, and I also suspect there are some incidents that are best described by a polygon instead of a point or a line.
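For illustration, here’s roughly what that difference looks like in GeoJSON, the de facto interchange format for this kind of geographic data. The coordinates below are invented, not real station locations (note that GeoJSON puts longitude before latitude):

```python
# A Point can only mark one spot on the map.
point_report = {
    "type": "Feature",
    "geometry": {"type": "Point", "coordinates": [-73.95, 40.68]},
    "properties": {"title": "F train suspended"},
}

# A disruption along a subway line is better modeled as a LineString
# running between (and through) the affected stations.
line_report = {
    "type": "Feature",
    "geometry": {
        "type": "LineString",
        "coordinates": [
            [-73.95, 40.68],  # start of the affected stretch
            [-73.97, 40.69],  # intermediate station
            [-73.99, 40.70],  # end of the affected stretch
        ],
    },
    "properties": {"title": "F train suspended along this stretch"},
}
```

GeoJSON also supports Polygon geometries, which would cover the borough-wide incidents I mentioned; the gap was in Ushahidi’s data model, not in the available formats.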
Similar to the incidents being specified by a point in geographical space rather than a line, they are also represented by points in time rather than durations. Transit delays have a finite duration (even if the duration isn’t known up front). I would love it if you could set incidents to expire after a certain amount of time (24 hours maybe?) rather than requiring an admin to go back into the system and either edit or delete the incident. The overhead can be prohibitive.
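The auto-expiry I’m asking for is simple to sketch: stamp each report with a creation time and filter out anything older than a time-to-live. The 24-hour window and report structure here are placeholders, not anything Ushahidi actually implements:

```python
from datetime import datetime, timedelta

TTL = timedelta(hours=24)  # placeholder expiry window

def active_reports(reports, now=None):
    """Drop reports older than TTL, instead of requiring an admin
    to go back in and edit or delete each one by hand."""
    now = now or datetime.utcnow()
    return [r for r in reports if now - r["created"] <= TTL]

now = datetime(2010, 12, 28, 12, 0)
reports = [
    {"title": "A/C suspended", "created": now - timedelta(hours=3)},
    {"title": "B61 detour", "created": now - timedelta(hours=30)},  # stale
]

current = active_reports(reports, now)
```

A fancier version would let the reporter set the TTL per incident, since a downed tree and a blizzard have very different lifespans.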
Another issue is that it’s somewhat difficult to submit reports. You have to visit the website and submit a form. Actually that’s not really true–Ushahidi supports reporting via Twitter hashtags, SMS, or mobile apps (though there isn’t an iOS app yet). These are decent options, but you don’t really get the good geo-location data this way. It’s probably only a matter of time before the mobile options for Ushahidi reporting get really good, but for now it’s a bit clunky.
Despite these issues, though, it’s really interesting to see how far this kind of technology has come. It’s also interesting to think about how many different mature platforms Ushahidi is built on: there’s Linux, Apache, MySQL, PHP, the Google Maps API, the Twitter API, SMS, Email, RSS, probably many others. It’s pretty staggering when you think about it, and all I really had to do to set it up was press a “submit” button on a web page.
Even though the Haiti earthquake was a big moment in the spotlight for Ushahidi, I think we have yet to hear the last of them. They are building an amazing tool and I’m excited to see how it can evolve and continue to help communities deal with local crises and civic emergencies.
December 28, 2010
Lots of people are buzzing lately that we’re in another “dotcom” bubble, roughly ten years after the last one. In mid-November, noted New York venture capitalist Fred Wilson described some “storm clouds” ahead for the tech investing space. He described what he sees as some unsustainable “talent” and “valuation” bubbles. This was around the time of the TechCrunch story about the engineer that Google gave $3.5 million to stick around.
Not too long after that, Jason Calacanis of Mahalo fame wrote a brilliant edition of his email newsletter in which he outlined four tech bubbles he sees right now: an angel bubble (similar to Wilson’s valuation bubble), a talent bubble, an incubator bubble (new firms cropping up to try and copy the successes of YCombinator and TechStars), and a stock market bubble.
And the frothy news just keeps on coming: Groupon this week allegedly turned down a $6 billion acquisition offer from Google (yes, that number has nine zeros and three commas in it). Oh, and also, the SecondMarket valuation of Facebook is about $41 billion. That makes it #3 in the web space after Amazon and Google.
And, finally, there was this hilarious and depressing tweet going around yesterday from @ramparte:
But for me the proof was in two recent encounters with people decidedly not in the tech industry: my accountant and my banker. Each of them, upon learning what I do for a living, started talking to me about their tech business ideas. One was intriguing, one was, shall we say, vague, but everywhere I turn these days I feel like someone’s trying to pitch me on their idea for a social network, a mobile application, or whatever. And who am I? I’m a nobody. Can you imagine how many pitches people like Fred Wilson and Jason Calacanis get? It must be absurd. And in any case, what most of these folks don’t realize is that the idea is about 5% of a successful business. The remaining 95% is laser focus and nimble execution.
I feel lucky to be in technology right now–the economy is so crappy for almost everyone else. And that’s got to be one of the driving factors of this bubble right now. It’s one of the only healthy industries out there, and it’s attracting people who are disenchanted with whatever sick industry they happen to be in. Other driving factors of course are the recent explosive growth in mobile computing, the maturation of the web development space (frameworks like Ruby on Rails and Django that make web app development almost frictionless), and the rise of APIs and web services that allow vastly different sites to integrate their offerings.
It’s as if all the fishermen in the world have descended on one supremely awesome spot. A lot of people will catch a fish or two, some will catch enough that they’ll never have to fish again, but most won’t catch a thing.
If anyone ever offers me $6 billion for anything, please remind me not to turn them down.
December 5, 2010
O’Reilly Media’s Web 2.0 Summit, which took place over the last few days in San Francisco, got me thinking, why is the web still only in version 2.0?
Tim O’Reilly himself coined the phrase Web 2.0 back in 2004 for his first conference of the same name. It was defined by an evolution in front end technologies like AJAX and bubble letters, back-end technologies like web services and RSS feeds, and business models like crowdsourcing and software as a service.
So given that we’re six years into Web 2.0, when will we get to Web 3.0? The answer is never. No one will ever start calling it Web 3.0. For one thing, it’s not catchy. Web 2.0 has a certain ring to it that Web 3.0 doesn’t. Also, I think it will be difficult for people to come to a consensus on when technologies have evolved enough to move to a new version number. Web 2.0 was coined by a single person; Web 3.0 would have to emerge more organically. We’re much more likely to describe future “versions” of the web with descriptive phrases rather than numbers.
Tim Berners-Lee has always been against this nomenclature anyway. His alternative to “Web 2.0” was the “Read/Write Web,” because of the way in which users became empowered to contribute en masse to the data on the internet. And in 2006, when asked what Web 3.0 would be, he said that a component of it would be “The Semantic Web,” or “a web of data that can be processed directly and indirectly by machines.” In other words, a web in which the machines can glean meaning from the data, in addition to simply manipulating it.
But I would argue that we are already at the next evolution of the web, and yet it’s not about semantics. It’s about context. This new phase of the web has largely been catalyzed by two breakthroughs: advances in the power and reach of mobile computing, as well as what Mark Zuckerberg calls “the social graph.” Both of these lend not meaning but context to data, and that is a very powerful thing.
Mobile devices can contextualize data around locations, photos, video, and audio (among other things). And of course the social graph connects data to people. The “Internet of Things,” as it continues to grow, will increasingly connect data to objects (shall we call it the “object graph?”). Although context is a step in the direction of semantics, we are still a ways away from getting machines to the point where they can interpret meaning from this data.
Indeed the “web” isn’t even about machines anymore. What was once a network of machines connected by wires is now a network of people, places and things connected by context. There is a new network growing atop the old.
Perhaps the semantic web will come in version 4.0 (although we still won’t call it that). But I think the best characterization of the most recent evolution of the web is the “Contextual Web” (I am not the first to call it such). Twitter, Facebook, Foursquare, the iPhone, Android, and many other prominent technologies can fall under this term, and I think it best describes the current proliferation of mobile and social technology that is spawning so many new and interesting businesses.
November 18, 2010
With the recent release of Windows Phone 7, Microsoft has finally figured out what Apple has known for many years: design sells. The interface is austere in a way few Microsoft products are. In some ways it’s almost too sparse–users navigate from screen to screen by means of two-dimensional “tiles” rather than 3D buttons. Ultimately, though, underdone beats overwrought.
Granted, “design” is a huge umbrella term, covering everything from ergonomics to user interaction to typography to color palette, but all those things contribute greatly to people’s emotional response to a product. Good design makes a product trustworthy. It indicates the level of care that went into creating the product. It has the user’s best interests at heart.
The key differentiator in software used to be features. We thought that more features and more customizability meant happier customers. We were wrong–more features meant customers who were more confused and frustrated. Turns out, in an age of abundance, clarity is a scarce resource. Good design is the conduit of clarity.
Compare the Windows Phone 7 home screen above with the way Windows Mobile used to look:
Mom, have fun figuring out what exactly a “Comm Manager” or “SIM Manager” is.
Mint.com was able to take on a huge company like Intuit (and eventually get acquired by them for $170 million) by competing solely on design and user experience. I never got any direct mail from Mint like I do from Intuit. I never saw Mint.com on the shelf at Staples like I did Quicken. Mint has probably 1/10th the number of features that Quicken has. And yet, in the end, their beautiful design and simple interface added up to $170 million in value.
November 11, 2010
Mint.com is a great website in a lot of ways. It’s great to be able to track all your financial data in one place, it has a really nice user interface, and it’s free. But when a company has access to so much of your sensitive data, it is an understatement to say that they need to be really careful with that data. Today Mint did something to lose my trust forever, something that led me to cancel my account immediately.
Early this morning I received six blank emails from firstname.lastname@example.org. Being in the business, I immediately recognized that this was likely coming from Mint’s staging (test) server. I went to their support forums, searched for this issue, and found this thread. I was the eighth person to comment and now there are over 200 comments and counting. The main frustration seems to be with the fact that Mint tried to reassure users that no customer data is stored on the test system from which these emails originated. That raises the question: then why did it store our email addresses?
The websites I work on store far less sensitive user data than banking and credit card information, and yet we never EVER store real user email addresses (or mailing addresses or passwords) in our test environments. The fact that Mint screwed this up reveals a major lack of competence in the area of security. And security needs to be their top priority, or at the very least a core competency. If they aren’t getting this right, what else aren’t they getting right? Consequently, I cancelled my Mint account just about as fast as I could.
The lesson here is not so much that companies shouldn’t store real user data on their test systems, but that if they do, they need to clearly communicate that to customers. If Mint had said, we store no customer data in our test systems other than email addresses, I may have questioned why they needed our emails on the test environment, but I still might have trusted them. When they said they stored NO customer data on stage, and yet somehow that environment had my email address, well, then all trust is lost.
October 13, 2010
The information overload problem is bad and getting worse. Nicholas Carr, in his sentimental but thought-provoking book The Shallows: What the Internet Is Doing to Our Brains, argues that our chemical addiction to new information is eroding our ability to concentrate on lengthy tasks. But even if this is true, it is only one part of the information overload equation.
There’s another side effect that I haven’t seen much written about. Information overload is destroying our sense of context. Old media made an attempt at contextualizing information. Lengthy articles in the New York Times Magazine, for example, wouldn’t just give me facts, but would also tell me why I should care about those facts. They would give me some background and connect those facts to other things I probably already cared about.
On the other hand, the vast majority of new media, by which I mean things like blogs and Twitter and even the 24-hour news channels, keep things short–that’s what people want, right?–and rarely build a contextual framework around the information they present. That’s not to say that blogs and Twitter aren’t useful for certain things. Twitter is an amazing way to keep up on the zeitgeist, a use case I missed when I first signed up for Twitter and dismissed it as useless. Still, most of the time when I engage with new media I find myself saying, “So what?” I may know what’s going on, but it’s increasingly difficult to see the bigger picture. I feel like I’m almost always “in the weeds.”
But not all hope is lost. If the Internet has proven one thing it’s that it’s an amazingly flexible platform on which to solve information problems of all sorts. I’d actually love to see someone build a solution to this problem, one that pulled my RSS and Twitter feeds, analyzed the content to determine what topics were being discussed, and searched the web for lengthier / meatier pieces on those subjects. I don’t think this would be that hard to do. The question is–would I then have time to actually read all this additional information?
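A naive version of that pipeline isn’t hard to imagine: pull the titles out of your feeds, count the non-trivial terms to guess what’s trending, and turn the top terms into search queries biased toward longer pieces. Everything here, from the stopword list to the query phrasing, is a made-up sketch of the idea, not a real product:

```python
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "to", "in", "on", "and", "for", "is", "at"}

def trending_topics(titles, n=3):
    """Crude topic extraction: count non-stopword terms across
    feed-item titles and return the n most common."""
    words = re.findall(r"[a-z']+", " ".join(titles).lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(n)]

def longform_queries(topics):
    # Bias the follow-up search toward meatier coverage of each topic.
    return [f'{t} "long read" OR "in depth"' for t in topics]

titles = [
    "Groupon turns down Google offer",
    "Google said to value Groupon at $6B",
    "Facebook valuation hits $41B on SecondMarket",
]
topics = trending_topics(titles)
queries = longform_queries(topics)
```

A real implementation would want proper topic modeling rather than word counts, but even this crude version would surface what your feeds are actually about.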
October 5, 2010
I use Foursquare a lot. You could say I’m part of the passionate but niche group that checks in at least a couple of times per week (more like a couple of times per day).
The odd part of it is that I can’t really tell you why I do it. Is it for the badges and mayorships? Not really–for all the talk of “game mechanics,” these things are mostly pretty lame. Is it for the specials I can get from local retailers? No, there aren’t enough of those available yet. Is it because of the serendipitous encounters I can have with friends? No. Having two young children precludes that quite a bit.
So if I’m not doing it for any particular reason, maybe I should spend some of my check-in time doing something productive.
Enter CloudMade’s Mapzen POI Collector. This iPhone app exists for one purpose, and one purpose only: to add and update points of interest in the open-source geo database OpenStreetMap–the Wikipedia of geography.
I realized today that instead of always checking in everywhere I go, I could earn a lot more karma points (if not badges and mayorships) by entering and editing points of interest everywhere I go. (By the way, if you want to search points of interest, don’t use this app; use something like the Open Maps app.)
Why the karma? The data I contribute using the Mapzen app is open and licensed under the Creative Commons SA license, so it can be freely and easily used in myriad applications that are competing with closed platforms such as Foursquare and Yelp.
So from now on I’m going to try and do my part to make the world a better place–instead of checking in on Foursquare, I’m going to spend that time making OpenStreetMap so good it’ll give Google, Foursquare and Facebook all a run for their money.
September 12, 2010
Email is broken. In many ways. So are instant messaging and document collaboration. Google Wave was supposed to fix a number of these problems by making threaded and multi-user conversations easier to manage, and by introducing realtime chatting and collaboration into the mix. But Wave’s failure is also a fantastic illustration of a great idea and brilliant technical implementation totally overpowered by some absolutely awful product design.
Google’s famously spartan approach to search was the fuel for their explosive growth in the early 2000s. While sites like MSN and Yahoo were getting more complex and portal-like, Google offered an absurdly simple alternative: enter your query and click the search button.
Somehow over the years Google has lost this simplicity in many of its products, with Google Wave as the paradigmatic example. Wave was an engineering marvel, and I’m quite certain its mix of synchronous and asynchronous functionality will be used to good effect in a number of other products, but the user interface was just dreadful. It made no sense and I couldn’t ever really figure out how to use it–and I work in software for a living. Imagine my mom using it.
Ultimately, I think Google Wave suffered from three fatal product design flaws:
- Complicated user interface – it’s kind of like an instant message client, except that you have to click something every time you want to add a new message. It’s kind of like email, but if I archive a thread and someone else adds a new message to it, the thread appears back in my inbox. It’s kind of like document collaboration, but it doesn’t have all the features of Google Docs, let alone MS Word.
- No integration with email / docs / chat – Wave promised to solve the problems inherent in email, instant messaging and document collaboration, but if Google wanted it to supersede these things (did they even want to?) they should’ve integrated it into GMail, GChat or Google Docs. I don’t need yet another place to check messages, what I need is a better way to manage my existing communications. I often had to remind people over email or IM to check Google Wave for a message I sent them.
- Meatball Sundae – I’ve never read Seth Godin’s book Meatball Sundae but I love the metaphor. A meatball sundae is “the unfortunate result of mixing two good ideas.” Google Wave was a deep-fried meatball sundae. Was it email, instant messaging, document collaboration? It was all three, and yet it was none. The best products solve one problem brilliantly well. Google Wave tackled three problems and solved none of them.
August 4, 2010
All the buzz around those Old Spice videos got me thinking that there’s been a lot less buzz lately about the BP oil spill, which is coming up on its 90-day birthday and shows no signs of slowing down. And now the presidential commission appointed to investigate the spill is recommending that the moratorium on deepwater drilling be suspended.
Are we all so distracted and ADD-addled that we’ve already forgotten the magnitude of this disaster? And while we’re at it, what about improving the financial system–did we ever see that one through to its conclusion? Are there still wars going on in Iraq and Afghanistan?
So I wanted to see exactly how much we’ve lost interest in the oil spill vs. how distracted we are by funny deodorant videos. I think this chart of Twitter trends from Trendistic.com says it all:
And here is a sad chart of the decline in Twitter mentions of the oil spill in the last 30 days:
July 15, 2010