The Sundance Institute, responsible for the famous film festival, realized that its community of artists and audiences needed better digital connections. The organization also wanted to improve web application performance at scale, keep costs in line with traffic, and find a platform to help bridge data silos. To accomplish these goals, it turned to Pivotal for help with a redesign of the website and adopted Cloud Foundry-based technology, Pivotal Web Services, to power its web applications. This post explains the background, challenges, approach, and results.
What if you could have the efficiency and scale of HDFS with the mature ecosystem of SQL? What if you could perform complex queries over tens or hundreds of nodes and petabytes of data? What if your existing SQL-compliant tools worked with this platform? What if you could connect your applications to this platform using standard ODBC or JDBC? Well—you can! HAWQ is a true SQL engine engineered for Apache Hadoop. In this episode, we explore why HAWQ was developed, how it works, and some of the benefits of using it in a Hadoop-based world.
Pivotal employees recently volunteered to work with the San Francisco City Academy as a way to give back via Pivotal Serve Days. The group spent a morning in one of the city’s most notorious neighborhoods, the Tenderloin, where a school has popped up to make a difference. Our fellow employees helped to deliver and install computers so that the students could have access to additional educational resources, something that truly excited the students and left our employees grateful.
The number of connected things is expected to grow 5x over the next five years, and the connected car provides a functional and architectural pattern for all Internet of Things systems. This week at Mobile World Congress, Pivotal will give an under-the-hood demonstration of the sensors, mobile apps, big data, and data science behind a connected car dashboard, and this article explains the entire platform, each component, and how it can be applied to other industries.
The societal impact of big data technologies and data science practices was demonstrated in February, with the White House appointing DJ Patil as its first chief data scientist, as well as significant news concerning the financial services sector, student debt, government benefits, and more. Within the industry, the announcement of the Open Data Platform initiative, which includes Pivotal, Hortonworks, and a number of other industry leaders, received much attention. Here’s our roundup of the top data science news of the month, both from Pivotal and beyond.
In this post, Pivotal’s Coté explains how the "real service platform" is your delivery pipeline, along with some dialogue about unicorns, horses, donkeys, yaks, and fat boy scouts. After providing reference research on the desired and current state of continuous delivery and integration, Coté underscores two key things for development teams—the importance of the feedback loop and where to focus on overall delivery process improvements.
In this episode, we take a quick look at how Orgs and Spaces work in Pivotal CF to help you organise all of your development efforts—to implement quotas where required, and to segment environments into development, test, and production. In addition, we will also look at the kinds of role-based access control (RBAC) you can implement to ensure that users access just the resources they need—and no more.
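As a rough sketch of how this structure looks in practice, the cf CLI can create orgs, spaces, quotas, and role assignments. The org, space, quota, and user names below are illustrative examples, not taken from the episode, and the commands assume you are already logged in as an administrator:

```sh
# Define a quota, create an org bound to it, and carve the org into environment spaces
cf create-quota team-quota -m 10G -r 50 -s 10
cf create-org acme -q team-quota
cf create-space development -o acme
cf create-space test -o acme
cf create-space production -o acme

# RBAC: grant a user developer rights in development only...
cf set-space-role dev@example.com acme development SpaceDeveloper
# ...and read-only auditor rights in production
cf set-space-role dev@example.com acme production SpaceAuditor
```

This keeps the user able to push and scale apps in development while limiting them to viewing app and space details in production.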
Today, we are announcing Informatica’s addition to the data lake ecosystem launched by Pivotal and Capgemini. This post provides background on the motivations and hurdles involved in building a data lake and using it to help meet business goals. The article goes on to explain the benefits, and what Informatica brings to the table alongside Capgemini and Pivotal’s recent announcements: open sourcing the Pivotal Big Data Suite, the Open Data Platform, and new application services.
Pivotal’s managing editor of the blog, Stacey Schneider, spent last week at three major events expecting to talk mostly about data. While data was a key theme, the real struggle for customers centered around transforming aged enterprise businesses into technology companies that can compete in today’s tech-savvy world. In looking at the dichotomy between the companies that are succeeding on their journey to transformation and those that are struggling, a common theme emerged: Pivotal Labs is the secret weapon. This post explains why.
In the wake of the Open Data Platform (ODP) initiative announced earlier this week, Pivotal’s Roman Shaposhnik, founder of Apache Bigtop, shares a history of how the ODP has been part of his personal vision for Apache Bigtop and the wider Apache Hadoop ecosystem. In this post, Roman explains how fragmentation occurs in open source, how open collaboration solves it, and why he decided to work at Pivotal and embark on this journey to do big things for big data.