
Oskin-CACM 2008

The Revolution Inside the Box

Oskin

memory performance parallel programming
concurrency multicore computer architecture 
 
@article{oskin:cacm2008,
  title="The Revolution Inside the Box",
  author="Mark Oskin",
  journal="Communications of the ACM",
  volume="51",
  number="7",
  month="July",
  year="2008",
  pages="70--78"
}

Ever-increasing performance has been driving the IT industry

  • "Imagine if other industries, such as the auto or airline business, had at their core a driving source of exponential improvement."
  • But we've hit diminishing returns of ever more complex sequential architectures
  • It's now too difficult to translate the transistor-count growth of Moore's Law into exponential performance increases
    • Ever-deeper pipelines and more complex architectures are difficult to design and validate, lengthen critical paths, and increase power consumption

Processor manufacturers have "bet their future" on multicore

  • Many smaller, simpler core designs are easier to develop and test
  • Cores with manufacturing defects can simply be mapped out, improving yield
  • Can continue to offer exponential peak performance growth
  • Clock frequency can be reduced, wire delay is no longer as large an issue
  • But software needs to leverage those cores well...
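As a concrete sketch of what "leveraging those cores" means, here is a minimal data-parallel decomposition in Python. The names and the prime-counting workload are illustrative, not from the article; ThreadPoolExecutor keeps the sketch simple, while a genuinely CPU-bound job would want ProcessPoolExecutor to sidestep CPython's GIL and actually occupy multiple cores.

```python
# Sketch: splitting an embarrassingly parallel task across workers,
# the kind of restructuring multicore software will need.
from concurrent.futures import ThreadPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) -- a stand-in CPU-bound kernel."""
    lo, hi = bounds
    def is_prime(n):
        return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

def parallel_count(limit, workers=4):
    """Split the range into one chunk per worker and sum the results."""
    step = limit // workers
    chunks = [(i * step, limit if i == workers - 1 else (i + 1) * step)
              for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))
```

The answer is the same as the sequential loop; only the decomposition changes, which is the core of Oskin's point about software restructuring.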

Unlike previous parallel computing attempts, this stands to become much more mainstream

  • For SMP, the extra processors cost more and most applications didn't use them
  • For multicore, equivalent single-core processors will cost more, or be impossible to build
  • But software needs to leverage those cores well...

Parallelizing legacy code is most probably a non-starter

  • But adding new functionality in a parallel fashion is likely to add significant value by putting the extra cores to work
    • For example, much better spelling, grammar, and content checkers or voice synthesis bolted onto word processors
  • Programs using some framework, e.g. an SQL database, may also see improvements if that framework is parallelized
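The "bolted-on" idea above can be sketched as a checker running on a worker thread while the main path stays responsive. Everything here is hypothetical (the toy dictionary, `check_spelling`, `edit_document` are not from the paper), just illustrating the shape of adding parallel functionality rather than parallelizing existing code.

```python
# Sketch: a hypothetical spell checker offloaded to a background worker,
# an example of adding new parallel functionality to a sequential app.
from concurrent.futures import ThreadPoolExecutor

DICTIONARY = {"the", "revolution", "inside", "box"}  # toy word list

def check_spelling(text):
    """Return words not found in the dictionary."""
    return [w for w in text.lower().split() if w not in DICTIONARY]

pool = ThreadPoolExecutor(max_workers=1)

def edit_document(text):
    # Kick off the check on a spare core/worker...
    future = pool.submit(check_spelling, text)
    # ...the main thread could keep editing/rendering here...
    return future.result()  # collect flagged words when ready
```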

Computer architecture and programming languages research is being seriously revitalized by this

  • Memory wall: Bandwidth to/from main memory and memory consistency are as big a concern in multicore architectures, if not bigger
  • Power: Techniques for reducing *and evaluating* power consumption will require continued development
  • Design Complexity: Designs may begin to go in reverse, simplifying, to produce more energy efficient, cheap processors that rely on parallelism for performance
    • Interesting idea: Heterogeneous chips, which pair a few complex CPUs for sequential tasks with many small, cheap, simple CPUs for the bulk of the work, optimizing cost and efficiency
  • Reliability: Feature sizes may actually go up or at least stabilize with multicore, reducing concerns about particle-induced errors
    • Redundant computing w/ multicore as well as runtime and manufacturing-time error detection will take its place though
  • Evaluation techniques: Simulated evaluations have all the same problems as in networking, with difficulties in repeatability, etc
    • Multicore introduces new problems, however, such as simulating an exponentially growing number of cores on sequential machines...
  • Instruction level parallelism: There will still be advantages in very fine-grained parallelism, at a granularity too fine to exploit easily on multicore architectures
  • Education: Need to get parallel programming better incorporated into curricula