
Cloud and Heterogeneous Computing Will Stretch Network Performance

As I was browsing through my Google Reader, an article in Network Performance Daily caught my eye. It talks about Gartner’s list of the “10 most important strategic technologies of 2009.”

One of the technologies listed is heterogeneous computing. Over the decades, computing has evolved from tasks running on a single computer to applications that span multiple processors and computers on a network. With technologies like TCP, UDP and IP, we have been able to network computers together so that they can collectively perform complex tasks; with multi-threading, applications can be split up to run concurrently on multiple processors; with virtualization, we can host an application on any processor; and with cloud computing, the user interface can be separated from the computer that does the real processing.

“Isn’t heterogeneous computing the next step?” the author asks, with the reasoning, “Programs that can be run from anywhere, and processed on any computer – or all computers – on the network, without regard to what type of hardware are in the computers or where the computers are physically located?”

That could very well be the next step. And as the article says, when highly compute-intensive tasks such as video rendering or complex modeling are split into parallel threads and farmed onto a heterogeneous computing network of powerful processors, we may start seeing network bandwidth as the critical resource.
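The idea can be sketched in a few lines. This is a minimal illustration, not anything from the article: `render_frame` is a hypothetical stand-in for one unit of compute-intensive work, and a local thread pool stands in for the remote machines of a heterogeneous network.

```python
from concurrent.futures import ThreadPoolExecutor

def render_frame(frame_number):
    # Hypothetical stand-in for one unit of heavy work, e.g.
    # rendering a single video frame; here just a cheap checksum.
    return sum((frame_number * i) % 251 for i in range(1000))

def farm_out(frame_numbers, workers=4):
    # Split the job into independent tasks and farm them onto a
    # pool of workers; on a heterogeneous network these would be
    # remote machines reached over the network, not local threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(render_frame, frame_numbers))

results = farm_out(range(8))
```

Once the workers really are remote, every task dispatch and every result carries data across the network, which is exactly where the bandwidth pressure the article anticipates comes from.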

In traditional parallel computing, where arrays of tightly coupled processors are housed within a single multiprocessing computer, the ratio of compute power to communication bandwidth has been the critical consideration. Communication between processing elements typically used a shared-memory architecture or specialized links between the processors.

In the context of today's heterogeneous computing network, processors are more loosely coupled, and much of the communication will be over the network. As we start seeing applications running on bigger and bigger clusters of high-power compute servers, the performance demands — high bandwidth, low latency, zero loss, and so on — on the networking infrastructure will keep increasing.

Perhaps this will create the need for a new kind of networking infrastructure that is specifically geared to meet the challenges of such applications.

