I’m excited to share three announcements today from Hortonworks and Pivotal that are important for the Apache Hadoop® and Big Data markets. Collectively, these announcements demonstrate the power and increasing influence of open source development and business models on next-generation data platform technologies.
Today, Pivotal announced they will be open sourcing the core of several of their key technologies including HAWQ, Pivotal GemFire and Pivotal Greenplum Database. As a company that’s 100% committed to open source, Hortonworks is delighted to see Pivotal make this move, and we are eager to join with the community to participate in the advancement of these projects. We were founded on the fundamental belief that the best way to deliver infrastructure platform technology is completely in open source. Done right, open source brings the largest number of developers together in a way that enables innovation to happen far faster than any single vendor could achieve and in a way that is free of friction for the enterprise.
We have also announced a unified approach to meeting enterprise data management and analytics needs through a strategic and commercial alliance that aligns both companies around a consistent set of core Apache Hadoop-based capabilities, backed by joint engineering and production support.
Specifically, we will work with Pivotal on joint engineering that accelerates the enterprise capabilities of Apache Hadoop® and YARN with Pivotal technologies like HAWQ and Pivotal GemFire. We will also work with Pivotal to certify the components of Pivotal’s Big Data Suite on the Hortonworks Data Platform (HDP) in support of customers deploying Pivotal technologies on centralized, YARN-based HDP environments.
Finally, Hortonworks will provide escalation-level support for the Pivotal HD 3.0 product, with Pivotal maintaining the support relationship with its customers and delivering Tier 1 support. When Pivotal customers need a bug fix or an enhancement for the Hortonworks-supported components of PHD 3.0, Pivotal will work with Hortonworks to develop, test, and release the fix or enhancement. This relationship will help Pivotal customers benefit from the latest Hadoop technologies in a way that’s predictable for the enterprise.
The third announcement is the Open Data Platform, which aims to enable collaboration and partnering among vendors and end-user enterprises focused on technologies within the Big Data and Apache Hadoop® ecosystems. The initial focus of the Open Data Platform will be to promote a set of standard Apache Hadoop-related technologies and versions that will increase compatibility among Hadoop-based solutions and simplify the process for applications and tools to integrate with and run on any compliant system.
The Open Data Platform will be managed as a shared industry effort focused on promoting and advancing the state of Apache Hadoop® and Big Data technologies for the enterprise broadly, as well as individual Apache Software Foundation (ASF) projects specifically. To accelerate enterprise adoption, Open Data Platform members will support community development and outreach activities, and will publish business-focused and technical papers that help clarify the rollout of modern data architectures that leverage Hadoop.
The Open Data Platform aims to amplify interest and participation in Apache Software Foundation (ASF) projects by encouraging vendors and end users to contribute to ASF projects of interest as per the ASF guidelines in order to further the capabilities, quality, and use of the technologies. More details on the Open Data Platform can be found at this link.
We are excited about all three announcements and we look forward to working with our counterparts at Pivotal on accelerating Hadoop-powered use cases that deliver maximum value across comprehensive and integrated datasets.
Editor’s Note: Apache, Apache Hadoop, Hadoop, and the yellow elephant logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
About the Author: As VP of Corporate Strategy, Shaun is focused on enabling Apache Hadoop to power the enterprise’s next-generation data architecture. Shaun has more than 25 years of experience in the software industry, spanning big data, cloud computing, application middleware, and enterprise open source software platforms. Shaun has a track record of building early-stage and midsize software companies into successful market leaders and has held VP- and Director-level positions at VMware, SpringSource, Red Hat, JBoss, Princeton Softech, HP, Bluestone Software, and Primavera Systems. He holds a B.S. in Electrical Engineering from Drexel University. Follow Shaun on Twitter: @shaunconnolly.