I write this post after spending a week at the OpenStack summit in Hong Kong. My company, Nebula, has not yet launched in Asia, so I took the opportunity to participate in the sessions and talk to the developers, leaders, and users in the OpenStack community. It was the perfect chance to reflect on where we are today, and the future of the project that I helped start just over three years ago.
My conclusion? OpenStack has captured the hearts of developers, but not the minds of enterprise IT.
As the CTO of NASA and CIO at Ames Research Center, I had the opportunity to deeply immerse myself in an organization where thousands of old applications ran on tens of thousands of servers across thousands of networks in hundreds of datacenters.
While NASA may have a larger and more complex IT footprint than many organizations, all large enterprises seek to run their old applications in an environment that looks and acts just like the original computers and networks those applications were designed for.
As servers continue to get bigger and faster while software stays much the same, we have seen servers virtualized, then storage. Once we virtualize the network, we will finally be able to faithfully simulate the tangled mess of physical infrastructure that is today's enterprise datacenter. At that point, most software will be able to run on a single, homogeneous system. As processors, storage, and networks continue to get exponentially faster and denser, it is conceivable that the contents of an entire datacenter could be virtualized and run on a single computer.
In short, virtualization maximizes the efficiency of running yesterday's PC-era-inspired software on today's PC-era-derived hardware.
This is a wonderful thing, and will keep most of the world's software running without intervention or modification for many decades to come. But this model has very little to do with OpenStack, Nebula, Amazon Web Services, or cloud computing in general.
OpenStack is an open-source reference implementation of infrastructure-as-a-service. Its community of developers is defining how physical computing, networking, and storage infrastructure map onto a set of logical services, forming an open foundation for a new generation of software that runs on service-driven, scale-out infrastructure.
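To make "API-driven" concrete, here is a minimal sketch of what requesting infrastructure from those logical services can look like, using the openstacksdk Python library; the cloud, image, flavor, and network names below are illustrative assumptions, not details of any particular deployment.

    import openstack

    # Credentials come from a clouds.yaml entry; "mycloud" is a placeholder name.
    conn = openstack.connect(cloud="mycloud")

    # Resolve logical resources by name (these names are assumptions).
    image = conn.compute.find_image("ubuntu-22.04")
    flavor = conn.compute.find_flavor("m1.small")
    network = conn.network.find_network("private")

    # Ask the compute service for a new server; the cloud decides which
    # physical machine, disks, and switches actually back the request.
    server = conn.compute.create_server(
        name="demo-instance",
        image_id=image.id,
        flavor_id=flavor.id,
        networks=[{"uuid": network.id}],
    )
    server = conn.compute.wait_for_server(server)
    print(server.status)  # ACTIVE once the instance is running

The request names only logical services and resources; the mapping to physical hardware is the cloud's concern, which is precisely the separation these APIs are meant to standardize.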
The first enterprises to adopt OpenStack -- Internet companies like Yahoo and eBay; research institutions like Xerox PARC and CERN; service providers like AT&T and Comcast; government agencies like NASA and NSA -- retain some of the most talented computer scientists and engineers in the world. In most cases, these organizations are using OpenStack to power new, highly strategic, and often very large applications.
Efficiently building large-scale systems is becoming increasingly critical for nearly every business (or government) that extracts value from better understanding all of our web logs, GPS location data, social media graphs, financial transactions, retail transactions, stock market transactions, electronic health records, genomic data, photographs, videos, satellite imagery, and of course the data from all of the sensors in our mobile phones, wrist bands, watches, televisions, cars, and so forth.
At Nebula, I have the opportunity to speak to thousands of organizations about our product, and it is clear that the chasm between "enterprise IT" and "mission" organizations at most enterprises is growing larger and larger. Business units that operate computing infrastructure outside of "corporate IT" are often referred to as "shadow IT" in older enterprises. At tech companies here in Silicon Valley, that kind of "shadow IT" is referred to as "technical operations," or TechOps.
At top Internet companies, TechOps is home to some of the most talented (and well compensated) engineers in the world. These teams operate very differently from corporate IT. They do not manage servers, VMs, or software -- at least, not the way that most CIOs think of it. They often do not virtualize anything, nor do they intend to. In TechOps, very small teams deploy new software on fleets of hundreds or thousands of servers, often several times a day. Working closely with software engineers, these teams strive to increase the velocity at which new features can be deployed, and often ensure all features are tested at scale.
A new generation of infrastructure is being developed to power new mobile and web applications and to put large amounts of data to good use (in most cases, at least). Today, most of that development takes place on public clouds like Amazon Web Services, Google Cloud, and Microsoft Azure.
This new generation of cloud applications and the public clouds they are being built on have captured the hearts of developers, but the minds of enterprise IT are still focused on providing "reasonable accommodation" for old applications.
Enterprise IT must either watch as their most strategic and critical applications are built on public clouds, or they must immediately invest in real, standards-based, API-driven private clouds.
The longer enterprise IT waits to provide a true private cloud, the larger the chasm grows between where the business has been and where it is going, and the greater the risk that IT will lose the hearts and minds of the innovators who are essential to the cycle of reinvention and to the success of every enterprise.