Distributed computing won't evolve to the next level without new architectures for reliability and security, according to David A. Patterson, who has taught computer science at the University of California-Berkeley since 1977. So far, government researchers have not fully stepped up to this new mandate. Berkeley and Stanford have teamed up, however, on one important project leveraging recent advances in statistical learning theory, a form of machine intelligence that could mark a breakthrough in how systems maintain themselves in the face of errors and attacks.
[Photo: Dave Patterson]
Distributed computing is core to a number of advances in electronics as we head into the consumer era, including peer-to-peer networks, grid computing, distributed sensor networks and ad hoc mesh networks of many varieties. Patterson has long been a proponent of distributed computing. He led the design of RISC I, the first reduced-instruction-set computer, which became the basis for the Sparc CPU of Sun Microsystems. He was also involved in Berkeley's 1997 Network of Workstations Project, which helped spark the move to large systems made up of clusters of commodity machines (see www.cs.berkeley.edu/~pattrsn/Arch/prototypes2.html). EE Times' Rick Merritt sat down with Patterson recently to talk about his work and get his take on where distributed computing is headed.
EE Times: What's the future for distributed computing?
Dave Patterson: What people complain about in computing today is not that computers aren't cheaper or faster; they talk about all the hazards of security and viruses and spam.
I worry we may be jeopardizing our whole infrastructure. Suppose consumers decide the Internet is filled with people trying to steal their money and the best thing is to just avoid it. In a couple of years, that could happen. So it would be wise for us to make security and privacy first-class citizens in computing.