
Laying A Foundation For Distributed Computing's Next-Gen

Statistical learning theory seems to be more resistant to bad inputs, so it's hard to screw up. That ties in with the need for security. If you throw bad data in, it will be harder to ruin the system. So it has a nice ruggedness.

EET: What's the downside of this approach?

Patterson: We know machine learning has false positives. They may occur, say, 20 percent of the time. So what we will try to do is define actions that won't hurt the system if they do something that wasn't needed. We have to build compensating actions that are fast, predictable and not incredibly damaging.

It turns out, that's not such a terrible design constraint. In fact, that would probably be the basis of a really good system.
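The design constraint Patterson describes can be sketched in a few lines. This is a hypothetical illustration, not his group's actual system: the detector, the 20 percent false-positive rate, and the worker-restart recovery action are all assumptions chosen to show why a cheap, idempotent compensating action makes false alarms tolerable.

```python
import random

FALSE_POSITIVE_RATE = 0.2  # roughly the rate cited in the interview

def detector(is_faulty: bool) -> bool:
    """A stand-in for an ML-style fault detector: it always flags real
    faults, but also fires spuriously about 20 percent of the time."""
    return is_faulty or random.random() < FALSE_POSITIVE_RATE

def restart_worker(state: dict) -> dict:
    """A compensating action chosen to be fast, predictable, and harmless
    when it wasn't needed: rebuild the worker from a clean template
    instead of patching it in place."""
    return {"healthy": True, "requests_served": state["requests_served"]}

def run_step(state: dict, is_faulty: bool) -> dict:
    if detector(is_faulty):
        state = restart_worker(state)  # cheap even on a false alarm
    state["requests_served"] += 1
    return state

state = {"healthy": True, "requests_served": 0}
for step in range(100):
    # Inject one real fault; the detector also misfires on ~20 other steps.
    state = run_step(state, is_faulty=(step == 50))
assert state["healthy"] and state["requests_served"] == 100
```

Because `restart_worker` is safe to run when nothing is wrong, the roughly 20 spurious detections per 100 steps cost only wasted restarts, never correctness.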

For instance, some people are working on ideas like mutating protocols. You can change the protocol being used by a system to avoid security attacks. If you were wrong, it would not be that bad; you would just go to the next protocol.
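A minimal sketch of that rotation idea follows. The protocol names and the `MutatingEndpoint` class are invented for illustration; a real system would also have to coordinate the switch with its peers. The point is only that a false positive costs one cheap rotation, matching the compensating-action principle above.

```python
from itertools import cycle

# Hypothetical protocol identifiers; a real system would negotiate these.
PROTOCOLS = ["wire-v1", "wire-v2", "wire-v3"]

class MutatingEndpoint:
    """Rotates to the next protocol whenever an attack is suspected.
    If the suspicion was a false alarm, the only cost is the rotation."""

    def __init__(self):
        self._rotation = cycle(PROTOCOLS)
        self.current = next(self._rotation)

    def suspect_attack(self):
        # Compensating action: fast, predictable, and harmless if wrong.
        self.current = next(self._rotation)

ep = MutatingEndpoint()
assert ep.current == "wire-v1"
ep.suspect_attack()              # possibly a false positive
assert ep.current == "wire-v2"
ep.suspect_attack()
ep.suspect_attack()
assert ep.current == "wire-v1"   # wraps back to the first protocol
```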