

The Fallacy of Premature Optimization

Ubiquity, Volume 2009, Issue February | By Randall Hyde



Moore's Law makes it seem as if resource limitations are always a minor consideration. If there will be twice as much memory for the same price in 18 months, why bother to squeeze a factor of 2 from an application's memory requirements? If the CPU will be twice as fast by then, why bother to shave some running time from a program? In other words, why bother to optimize programs? Isn't it better to just get them running and let Moore's Law take us off the hook when resources are constrained? Randall Hyde argues that optimization is important even when memory capacity and processor speed double regularly. Trying to do the optimization too early, however, can be a futile time-waster.



Very good article. Reminds me that good design, and efficiency in design, is never premature, and that bad design can be even worse than premature optimisation because it often requires a complete rewrite.

— Jase, Wed, 22 Feb 2017 09:42:54 UTC

Just be careful not to mischaracterise the more subtle argument that many people would make against premature optimisation: making good architecture and clean, de-coupled code a first priority can lead to more performance gains in the long term, for multiple reasons, without necessarily sacrificing maintainability or correctness, which is *usually* the primary concern above all others. One reason is that it's easier to optimise the 3% of code that is proven to be a performance bottleneck without breaking it. Another reason is that using higher-level abstractions that might seem very inefficient at first glance can sometimes lead to major optimisations elsewhere. An example of this is using immutable data structures, which you might do mainly to make code easier to reason about, but which can end up allowing much more parallelism and other optimisations. Or look at a model like that of React.js, which adds a layer of abstraction, the virtual DOM, that ends up paying massive dividends in performance (especially when combined with immutable data structures).

Something to remember in discussions like this is that priorities will differ according to your domain (see Joel Spolsky's "Five Worlds"). There are domains where algorithmic micro-optimisations are important every day (e.g. firmware for a high-volume network switch), and domains where it really is not necessary for the programmer to ever think about performance (e.g. a small company's internal web app written in two weeks and used by 10 people). So no one should be making generalised recommendations without stating their assumptions about which domains they are talking about.

Having said that, I agree that in most domains people should give some thought to performance during design and throughout development, rather than saving it all for the end. I wouldn't necessarily consider this premature. And if you take the 3% as a percentage of your coding hours, then it might be about an hour every week.

— Prash, Sat, 23 Jan 2016 20:48:05 UTC
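[Editor's note: the point above about optimising only the proven 3% can be made concrete with a minimal Python sketch. The running_means functions below are invented purely for illustration; cProfile and pstats are the standard-library profiling modules. The idea is to measure first, let the profile identify the hot spot, then replace just that function and verify the output is unchanged.]

    import cProfile
    import pstats

    # Deliberately naive hot spot: recomputes a prefix sum on every
    # iteration, so the whole loop is O(n^2).
    def running_means(values):
        means = []
        for i in range(1, len(values) + 1):
            means.append(sum(values[:i]) / i)
        return means

    # Optimised replacement: keeps a running total, O(n) overall.
    def running_means_fast(values):
        means, total = [], 0.0
        for i, v in enumerate(values, start=1):
            total += v
            means.append(total / i)
        return means

    if __name__ == "__main__":
        data = list(range(20_000))

        # Step 1: measure. Let the profiler, not intuition, point at the 3%.
        profiler = cProfile.Profile()
        profiler.enable()
        running_means(data)
        profiler.disable()
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)

        # Step 2: optimise only the confirmed hot spot, and verify the
        # behaviour is unchanged before swapping it in.
        assert running_means(data[:500]) == running_means_fast(data[:500])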

This article is very useful for me. I learned so many things; it is a great article for all software engineers.

— Bharathi Gonala, Wed, 21 May 2014 14:44:05 UTC

Great article! This is my research area, and I think the writer did a great job of drawing the line between what could be avoided and what should never be avoided.

— Feras, Thu, 24 Feb 2011 20:55:53 UTC
