To long-time Objective-C developers – especially those with an interest in modern programming languages – Swift is a very welcome and exciting step forward. At the same time, it can be frustrating at times due to the (current) state of the developer tools.
This is from Duolingo’s thorough and fair assessment of the pros and cons of building a production app in Swift. I myself have been burned too many times by Swift in recent months (having to rewrite classes in Objective-C when things didn’t go as planned) to consider it production-ready. But I’m glad to see that Duolingo is having success with it.
January 15, 2015 No Comments
Via Steve Laniel, this entertaining harangue against modern web development is a must-read for all web developers.
April 2, 2014 1 Comment
Update: Based on some helpful comments on /r/bitcoin, I edited my original post to clarify that Bitcoin derives its value only in part from the costs required to produce it. Other things contribute to its value too, but without those production costs it would be valueless. Not sufficient, but necessary.
Some fellow engineers I work with have been mining and trading Bitcoin since well before the mainstream hype of the last few months, and in talking to them I’ve become increasingly interested in it as well. It is one of the more elegant technological ideas to come along in a long time, and its greater economic, sociological, and political implications are also fascinating to me.
But when I first heard about it, I was hesitant to treat it seriously based on one fundamental doubt: how could a bunch of numbers spit out by a computer have intrinsic value in the same way that gold can? I understood how Bitcoin could have extrinsic value, based on things like trust and hype, but if that were all it were based on, why would Bitcoin be worth more than any other arbitrary currency one could create out of thin air?
And then I spent some time learning exactly how Bitcoin mining works, and discovered that there is, in fact, intrinsic value to the currency. (Funny how ignorance can lead you to dismiss things like that!) For Bitcoins to be created, a computer must solve a difficult math problem by guessing numbers by brute force. This requires a running computer (these days, a powerful computer built specifically for this type of math problem), which in turn requires electricity, which was probably generated from fossil fuels or nuclear power. So, in a way, you could say that the value of Bitcoins is at least partially derived from the fuel used to create the energy needed to power the computers that mine them.
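The brute-force guessing can be sketched as a toy proof-of-work loop. This is a simplification for illustration: real Bitcoin mining hashes a structured block header twice with SHA-256 against a target encoded in the block, whereas here `mine`, its arguments, and the single-hash scheme are all made up for the sketch.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce until the hash of (data + nonce) falls below a target."""
    target = 2 ** (256 - difficulty_bits)  # a smaller target means a harder problem
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # this nonce "solves" the puzzle
        nonce += 1

# With 16 difficulty bits, this takes ~65,000 guesses on average
nonce = mine(b"some transactions", difficulty_bits=16)
```

Every one of those guesses is a hash computation, and every hash computation costs electricity, which is the point: the coins cannot be produced without expending real resources.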
But, you ask, what happens as computers get more and more powerful and energy efficient? Shouldn’t Bitcoins get easier and easier to mine, dropping the amount of energy required to mine them, thereby decreasing their intrinsic value? It turns out that part of the ingenious and elegant design of Bitcoin prevents this from happening. The difficulty of the math problem that the mining machines have to solve changes dynamically over time. The system as a whole aims to stabilize the difficulty such that these math problems can only be solved roughly once every 10 minutes. If the computers start to solve the problems faster, the difficulty across the system is increased. If they start solving the problems more slowly, the difficulty is decreased.
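That feedback loop can be sketched in a few lines, modeled on Bitcoin’s actual rule: every 2016 blocks, the network compares how long the window actually took against the intended 10-minutes-per-block pace and scales the target accordingly, with the swing clamped to a factor of four. The function name and signature here are my own for illustration.

```python
def retarget(old_target: int, actual_seconds: int) -> int:
    """Adjust the proof-of-work target after a 2016-block window.

    A smaller target means a harder problem. If blocks came in faster
    than 10 minutes apart, the target shrinks (mining gets harder);
    if slower, it grows (mining gets easier).
    """
    expected = 2016 * 600  # 2016 blocks at 10 minutes each
    # Clamp to a 4x swing in either direction, as Bitcoin does
    actual = max(expected // 4, min(expected * 4, actual_seconds))
    return old_target * actual // expected

# Blocks arrived twice as fast as intended -> target halves, difficulty doubles
faster = retarget(1 << 224, (2016 * 600) // 2)
```

However cheap and fast the hardware gets, the problem is re-tuned so that solving it keeps costing roughly the same share of the network’s total effort.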
To be sure, there are other factors besides trust and hype that contribute to Bitcoin’s value. It shares many common characteristics with gold: durability, divisibility, combinability, homogeneity, and scarcity. All of these things factor together, along with the sociological stuff, to give Bitcoin its total value. But if it were possible to mine Bitcoins without expending resources, I believe their value would fall to zero. (There is another safeguard against this built into the technology: the total number of Bitcoins is capped at 21 million, so even if down the road it were theoretically possible to mine Bitcoins for free, only up to 21 million total could be harvested. That hard cap also contributes to Bitcoin’s scarcity, and therefore its value.)
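The 21 million cap isn’t a number stored anywhere; it falls out of a geometric series. The block reward starts at 50 coins and halves every 210,000 blocks until it rounds down to zero (the protocol counts in hundred-millionths of a coin, so the halving eventually bottoms out). A quick sketch, with the function name my own:

```python
def total_supply() -> float:
    """Sum the block subsidy over all halving eras."""
    subsidy = 50 * 100_000_000  # 50 coins, in hundred-millionths (satoshis)
    total = 0
    while subsidy > 0:
        total += 210_000 * subsidy  # every era lasts 210,000 blocks
        subsidy //= 2               # the halving
    return total / 100_000_000      # convert back to whole coins

print(total_supply())  # just under 21 million
```

The integer division is what terminates the series: 50 + 25 + 12.5 + … converges toward 100 coins per block-slot across all eras, which over 210,000 blocks per era lands just shy of 21 million total.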
And so, it is this enforced degree of difficulty, maintained for every single Bitcoin that will ever be mined, that ensures that there will always be some level of effort required, and therefore some baseline value in the coins. Without this enforced difficulty, computers would be able to simply pluck Bitcoins out of thin air, and despite all the other valuable characteristics of the currency, it would in all likelihood be worth nothing.
December 14, 2013 6 Comments
July 29, 2013 4 Comments
ECHELON is a code word for an automated global interception system operated by the intelligence agencies of the United States, the United Kingdom, Canada, Australia, and New Zealand, and led by the National Security Agency (NSA). I’ve seen estimates that ECHELON intercepts as many as 3 billion communications every day, including phone calls, e-mail messages, Internet downloads, satellite transmissions, and so on. The system gathers all of these transmissions indiscriminately, then sorts and distills the information through artificial intelligence programs.
Bruce Schneier, Secrets and Lies, 2004, 2nd ed.
July 15, 2013 No Comments
Drew Crawford, in a long but well-researched essay on mobile app performance:
July 10, 2013 No Comments
A recent episode of the Planet Money podcast profiled Thomas Peterffy, one of the first people to experiment and be successful with high-frequency trading. They told the story of how he was doing algorithmic trading before any of the stock exchanges supported electronic trading, and before NASDAQ even existed. So how did he do it? That’s the fascinating part.
He made his money building a system that was able to assign a fair market price to stock options. He then compared these values to what the options were actually trading for, and arbitraged the difference. Back in the late 1970s when he first started, he would print out the numbers and bring them to the trading floor in a huge binder. When the stock exchange banned him from bringing the binder, he stuffed the papers into every pocket his suit had.
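The podcast doesn’t say exactly what model Peterffy used, but the canonical fair-value formula of that era is Black-Scholes (published 1973, just before he started). A minimal sketch of the idea, assuming a European call and using it the way the paragraph describes: compute a fair price, compare it to the quoted price, and trade the gap. The function names are mine.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def call_price(spot: float, strike: float, rate: float,
               vol: float, t: float) -> float:
    """Black-Scholes fair value of a European call option."""
    d1 = (log(spot / strike) + (rate + vol ** 2 / 2) * t) / (vol * sqrt(t))
    d2 = d1 - vol * sqrt(t)
    return spot * norm_cdf(d1) - strike * exp(-rate * t) * norm_cdf(d2)

# At-the-money call: stock at $100, strike $100, 5% rate, 20% vol, 1 year
fair = call_price(spot=100, strike=100, rate=0.05, vol=0.2, t=1.0)
# If the option trades well below `fair`, buy it; well above, sell it.
```

The edge wasn’t the formula itself, which was public, but being able to evaluate it across the whole options board faster than the traders on the floor.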
Then Peterffy got himself a system called Quotron, a computerized service that delivered stock prices to brokers (it was a replacement for the widely-used ticker tape system). If he’d used the system the way it was intended, he would’ve read the quotes as they came in on the Quotron, manually input them into his algorithm, run the numbers, and cashed in. But that wouldn’t have been that much better than just using ticker tape, and the fact that he had a computerized system meant the data was in there somewhere, in digital form. If he could figure out how to retrieve it he could pipe it into his system and save a crucial, time-consuming step.
Nowadays if we wanted to do something similar, we might look into whether the Quotron had an API, and if it did we’d query that for the information. If it didn’t have an API, well, we might look for another system that did.
But Quotron had no such ability. So he did what any hacker worth his salt would do. He broke out his oscilloscope, cut the wires on the Quotron, reverse-engineered the data signal, and patched it into his system. And you think screen-scraping is hard?
When NASDAQ, the first all-electronic stock exchange, came online, he was faced with a similar system. Brokers could trade directly on the exchange via computer. This was no doubt a huge breakthrough, but there was still no way his system could make the trades automatically. So, again, he busted out his oscilloscope and patched his way into NASDAQ.
We developers could learn from Peterffy. The ease of software engineering has made most of us too complacent. When Twitter’s API terms change, we complain about it for a few days, and then change our business models to suit the new rules. But the real innovation, the real interesting stuff, the way we’ll make $5.4 billion like Peterffy did, is by bending the rules and building systems that give us a leg up on the competition, or, better yet, improve people’s lives.
To be sure, there are lots of hackers on the fringes of legality doing very interesting things, but the rest of us are somehow content to toe the line. We shouldn’t do anything illegal, but we should get close. Innovation comes from spurning the status quo, not complying with it. It’s time for people who know how to build things to bend the rules a little and see what comes out the other side.
(The podcast was based on Peterffy’s story as told in the book Automate This: How Algorithms Came to Rule Our World.)
September 13, 2012 2 Comments
One of the hardest things for any software designer to do is to decide not to implement a feature. Many software projects have been delayed or even derailed by feature creep: the tendency to widen the scope of a project during development. But in many cases, features that seem like “must-haves” during development can be deferred to later phases, or cut completely.
Today I ran into another example, also from Apple. In Xcode, you can switch from a header file to its corresponding implementation file (and back) using the keyboard shortcut Command-Control-Arrow (any arrow). This is a really nice way of navigating back and forth while you’re creating new instance variables and methods for your classes. However, when you navigate this way, the project browser on the left doesn’t update its highlight to indicate that you’re viewing a different file. Is this a bug? Probably not. It’s more likely the designers of Xcode deciding to rein in feature creep so that they could actually ship the product.
It’s so damn tempting to want to make sure every little bug is fixed and every little corner case is accounted for before you release your software. But, as they say, the perfect is the enemy of the good. It’s crucial to know when something is good enough so you can ship it as soon as possible. Take cut, copy, and paste: Apple didn’t introduce that feature until the third version of the iPhone’s operating system. By then they had already sold millions of phones to customers who decided they could live without that crucial feature.
March 14, 2012 No Comments
March 1, 2012 No Comments
I woke up today to this provocative article in The Guardian about how graphic designers are ruining the web. Naughton’s main argument seems to be that graphic design adds unnecessary bulk to websites, wasting bandwidth. Naughton is absolutely right that page sizes have increased over the last two decades of the web’s existence. He is also right that this is a problem.
However, he describes the problem as a “waste of bandwidth.” Last I checked, “bandwidth” is an infinite resource (unless maybe you extrapolate bandwidth to barrels of oil). The bigger problem is that more elements on a page (and bigger individual elements) will slow down page load times and potentially be frustrating for the user. If Naughton is saying that people who make websites should work to reduce the number and size of the elements on their pages, I completely agree.
But it does not then follow that websites also need to be ugly (he uses Norvig.com as an example of an underdesigned site that is compelling for its content if not its look and feel). Highly designed websites need not be bulky. Just because the BBC News homepage triggers 165 requests doesn’t mean all designed sites do. NPR.org is a lean and mean website, requiring roughly 50% fewer requests than BBC News. Yet I would say it offers a more user-friendly way to access information than Norvig’s site.
I’ll agree that some underdesigned sites are excellent because they are underdesigned: Craigslist.org and (the original) Google.com. But if Apple has taught us anything over the past decade, it is that things can be designed without being complicated and bulky. And that is the direction I’d like to see the web going in. That way we get to have our cake and eat it too.
- Graphic designers are ruining the web (guardian.co.uk)
February 19, 2012 8 Comments