Pivotal Cloud Foundry is now generally available to run on Google Cloud Platform (GCP). Now enterprises can build, deploy, and run cloud-native apps on the same infrastructure that powers Google Search, Google Maps and YouTube. PCF deployments can be enhanced and extended with Google’s data products (like BigQuery and the Cloud Machine Learning platform).
In this Pivotal Insights episode, host Jeff Kelly talks with Dormain Drewitz, head of marketing for Pivotal’s data business, who contemplates the data hugger mindset and proposes some strategies for changing hearts and minds when it comes to sharing data. The two also discuss Salesforce’s AI efforts and the wider implications of blending insights with applications.
Pivotal Insights host Jeff Kelly talks with Pivotal Greenplum’s Tim McCoy to preview the new Greenplum Command Center (GPCC), a new web-based monitoring and administration interface that is currently in beta and expected to be released later in 2016. They discuss the design philosophy behind the new GPCC, the driving reasons for overhauling the user interface and, most importantly, how the new and improved GPCC UI is going to make life easier for Greenplum administrators.
Data microservices play an important role in supporting data-driven applications—those applications that consume and put to use insights from Big Data analytics and data science systems. In this episode, host Jeff Kelly talks with Vinicius Carvalho, a platform architect at Pivotal, who is at the forefront of the data microservices movement. The pair discuss what a data microservice is and the benefits of building applications with data microservices, and provide some examples of the types of applications that data microservices enable.
As we reach the first release of Apache HAWQ, Pivotal is providing new single- and multi-node sandbox environments for architects, developers, and big data administrators to use. Support for Pivotal HDB, our version of Apache HAWQ, is also included. This article explains the background, design, and capabilities of our new sandboxes, including expanded test options for a multi-node environment to better mirror real-world scenarios. Download links included.
Big Data can seem complicated and overwhelming to many in the enterprise, especially to non-technical business folks. Indeed, predictive analytics and machine learning are complex disciplines. That’s why data scientists—those mythical creatures with a harmonious blend of math skills, statistics expertise, business acumen and a knack for storytelling—are in such high demand. But the conceptual framework for operationalizing Big Data in the form of smart applications follows a fairly simple formula—or recipe—that almost any technical professional can grasp. It's not for unicorns alone.
If you want to spin up an Apache Hadoop cluster, you need to grapple with the question of how to attach your disks. Historically, this decision has favored direct attached storage (DAS). However, technology has advanced and shared storage costs have declined, making network attached storage (NAS) a cleaner approach for many. This article lays out the cost-benefit trade-offs in today’s technology landscape, including relevant performance data to help you decide whether NAS or DAS is right for your Hadoop deployment.
Over the past half century or so, the art of developing software has advanced to the point where how well you develop software can quite literally make or break you in a market. At Pivotal, we developed this edu-taining quiz to test organizations on how fluent they are in modern development techniques, and even provided a handy list of resources to help you learn about areas you may not have embraced yet.
In this post, Pivotal data scientist Scott Hajek explains how to effectively use an MPP database as a model factory—a repository of data science models from which users can select models and apply them to new data. He explains how to generate, store, and use metadata and binary formats to expand the variety of data types supported in your model factory. Finally, he shows how the same concepts for handling binary data inside the database can be applied to image, audio, and video processing.
The American government has a new “moonshot” initiative that started this year: to cure cancer. One of the identified roadblocks to the cure is how data is shared between organizations—or rather, how it currently is not. Pivotal agrees with this assessment and is working with many of our customers at clinical research institutions to increase data availability and improve the ways it can be harvested and analyzed. Join us for a webinar on July 20, at 7 am PDT, to discuss with Aridhia how they are changing the way diseases are understood, managed, and treated.