Did Amazon Just Kill Open Source?
Yaron Haviv | December 2, 2016
After this week at re:Invent, it is clear that Amazon is unstoppable. AWS announced many more products, all fully integrated and simple to use. And if you thought infrastructure companies were its only competition, think again: the new Amazon offerings compete with established database vendors, the open-source big data ecosystem, the container ecosystem, security software and even developer and APM tools.
Open source is a key ingredient, but Amazon seems to be proving that usability and integration matter more to many customers than access to an endless variety of overlapping open-source projects.
It is interesting to see how Amazon on one hand bashes the open-source ecosystem and highlights the advantages of its own tools, while at the same time taking projects like Presto, which was developed in the open by Facebook, and turning them into packaged, revenue-generating products (the newly announced Athena service).
This should be a wake-up call for the tech and software industry!
Back in the day, we focused on creating modular architectures. We had standard wire protocols like NFS and RPC, and standard API layers like BSD and POSIX. Those were fun days: you could buy products from different vendors, and they actually worked well together and were interchangeable. There were always open-source implementations of the standard, but vendors could also build commercial variations that extended functionality or durability.
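To make that interchangeability concrete, here is a minimal C sketch that relies only on the POSIX file API. Nothing in it is vendor-specific: the same code compiles and behaves identically whether the file sits on a local disk or on an NFS mount from any conforming implementation.

```c
/* posix_copy.c - uses only the POSIX file API, so it runs the same
 * against any conforming implementation (local disk, NFS mount, etc.). */
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s <src> <dst>\n", argv[0]);
        return 1;
    }

    int in  = open(argv[1], O_RDONLY);
    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (in < 0 || out < 0) {
        perror("open");
        return 1;
    }

    char buf[4096];
    ssize_t n;
    while ((n = read(in, buf, sizeof(buf))) > 0) {
        /* Same syscall semantics everywhere; treat a short write as an error
         * to keep the sketch simple. */
        if (write(out, buf, (size_t)n) != n) {
            perror("write");
            return 1;
        }
    }

    close(in);
    close(out);
    return 0;
}
```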
The most successful open-source project is Linux, and we tend to forget that it has very strict APIs and layers. New kernel implementations must often be backed by official standards (USB, SCSI and so on). Open-source and commercial implementations live happily side by side in Linux.
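As a rough illustration of how strict those layers are, here is a minimal sketch (not a production driver) of a Linux character device. The module exposes itself to the kernel solely through the standard struct file_operations contract; the VFS layer above neither knows nor cares how it is implemented, which is exactly what lets open-source and proprietary drivers coexist behind one interface.

```c
/* hello_chardev.c - minimal sketch of a Linux character device.
 * The driver plugs into the kernel only through the standard
 * struct file_operations contract. */
#include <linux/module.h>
#include <linux/fs.h>
#include <linux/uaccess.h>

static int major;
static const char msg[] = "hello from a standard interface\n";

static ssize_t hello_read(struct file *f, char __user *buf,
                          size_t len, loff_t *off)
{
    /* Standard helper that copies from a kernel buffer to user space. */
    return simple_read_from_buffer(buf, len, off, msg, sizeof(msg) - 1);
}

static const struct file_operations hello_fops = {
    .owner = THIS_MODULE,
    .read  = hello_read,
};

static int __init hello_init(void)
{
    /* Passing 0 asks the kernel for a dynamically allocated major number. */
    major = register_chrdev(0, "hello", &hello_fops);
    return (major < 0) ? major : 0;
}

static void __exit hello_exit(void)
{
    unregister_chrdev(major, "hello");
}

module_init(hello_init);
module_exit(hello_exit);
MODULE_LICENSE("GPL");
```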
If we contrast Linux with the state of open source today, we see many implementations that overlap. Take the big data ecosystem as an example: in most cases there are no standard APIs or layers, let alone standard wire protocols. Projects are not interchangeable, which creates a far worse lock-in than using commercial products that conform to a common standard.
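For contrast, here is a sketch of what a common API buys in the database world. With the ODBC standard, the same C client code can talk to any engine that ships a driver; swapping vendors means changing only the data source name. The "warehouse" DSN, credentials and table below are hypothetical, and error handling is omitted for brevity.

```c
/* odbc_query.c - the same code works against any engine with an ODBC
 * driver; only the DSN configuration changes to swap vendors. */
#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV  env;
    SQLHDBC  dbc;
    SQLHSTMT stmt;
    SQLCHAR  name[256];
    SQLLEN   len;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* Point the DSN at a different engine and nothing below changes. */
    SQLConnect(dbc, (SQLCHAR *)"warehouse", SQL_NTS,
               (SQLCHAR *)"user", SQL_NTS, (SQLCHAR *)"pass", SQL_NTS);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM users", SQL_NTS);

    while (SQLFetch(stmt) == SQL_SUCCESS) {
        SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof(name), &len);
        printf("%s\n", name);
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}
```

Most big data projects offer nothing comparable: each ships its own client library and query dialect, so moving between them means rewriting the application, not reconfiguring it.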
How did we get here?
The tech industry is going through a monumental change driven by digital transformation. It changes the infrastructure and software stack dramatically; the old guard is in survival mode, and we seem to be missing the responsible tech leadership that would define and build a modular stack for the new age. Strong players like Amazon and Azure are building their own fully integrated offerings, so the rest of us need to exercise responsibility and work together, with a focus on integration rather than code.
We don’t need 20 more Apache projects that do the same thing, only slightly better. We don’t need 10 more open-source container management platforms, and we can’t turn poorly architected frameworks into a de facto standard. We need to start by defining the layers and components in the new stack, followed by APIs, protocols and common management paradigms. We should work to make existing projects and products fit this new model, while adding better ones.
That’s the only way to get back to a decent user experience: one in which we can easily build, secure and operate integrated stacks from independent components, with the ability to swap parts if need be, without being locked into project-specific or cloud-provider APIs. If we don’t do this, we will all lose and end up technically enslaved to the cloud.