Pivotal HDB 2.0, the Hadoop Native Database powered by Apache HAWQ (incubating), became generally available last week. This release marks a major milestone in the technology’s evolution from its massively parallel processing (MPP) roots toward a new category of cloud-scale analytical database, deeply integrated with the Apache Hadoop ecosystem. So, the technology is cool, but why does this really matter? In this post we’ll look at this release through the lens of digital transformation requirements.
In this post, Pivotal data strategist Jeff Kelly covers new research on the topic of cloud analytics and explains some of the reasons that agile practices continue to influence data-centric teams and technologies. The catalyst behind this current wave of agile data is cloud-based analytics, which makes it easier to access sandboxes, supports various data store technologies, and reduces the risk that performance hits pose to uptime. The cloud may be winning over all analytical workloads.
The new release of Apache MADlib (incubating) 1.9 introduces several new features, including path functions, which can be used to perform regular pattern matching over a sequence of rows and then extract useful information about the pattern matches. This useful information could be a simple count of matches or something more involved like aggregations or window functions. Path functions are applicable to a wide variety of use cases, including clickstream analytics, customer churn, predictive maintenance, fraud detection, and multichannel marketing. This post provides an overview of path functions and gives an example from e-commerce.
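To give a feel for the idea behind path functions, here is a minimal conceptual sketch in Python, not MADlib's SQL API: each row in a sequence is mapped to a one-letter symbol, and a regular expression is matched against the encoded sequence. The event names, symbol mapping, and pattern below are illustrative assumptions, not taken from the post.

```python
import re

# Clickstream events for one user session, in order
# (a conceptual stand-in for an ordered sequence of table rows).
events = ["home", "search", "product", "product", "checkout", "home", "search", "exit"]

# Map each row to a single-character symbol, analogous to how a path
# function's symbol definitions classify rows before pattern matching.
symbols = {"home": "h", "search": "s", "product": "p", "checkout": "c", "exit": "x"}
encoded = "".join(symbols[e] for e in events)

# Pattern of interest: a search followed by one or more product views
# that ends in a checkout.
pattern = re.compile(r"sp+c")

# Extract simple information about the matches, e.g. a count.
matches = pattern.findall(encoded)
print(len(matches))  # number of search-to-checkout paths in this session
```

In MADlib itself, the row-to-symbol classification and the pattern are specified declaratively over table partitions, and the extracted result can feed aggregations or window functions; the sketch above only captures the matching idea.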
The new Greenplum Database 126.96.36.199 sandbox has been released as an Amazon EC2 AMI, in addition to the VMware and VirtualBox formats. This post explains where to get it, what instance sizes to use, access setup, use of data sets, and related information.
Today, Pivotal is announcing the release of Pivotal Greenplum 4.3.8. It features the ability to run a single analytical query that returns results for data sitting across both on-premises data warehouses and clouds, starting with AWS S3. Now business intelligence, data science, and advanced analytical workloads can run in near-real-time across hybrid clouds, breaking down silos and opening up a new world of possibilities.
Is BI dead? According to some industry pundits, it seems to at least be reincarnating. The original business drivers behind BI are becoming less relevant as software and data-driven companies continue to drive competitive differentiation. At our Strata + Hadoop World keynote this Wednesday, we will explain what holds the key to these competitive advantages and how business intelligence must deliver information in context so that it drives action.
In this week’s Build Newsletter, we focus on all of the innovations coming out of South by Southwest, one of the world’s biggest stages for showcasing software and its incredible ability to reshape our culture. Starting off with Obama’s keynote perspective on the importance of embracing technology, we highlight some of the incredible innovations showcased at the event including apps that will feed the hungry, voice-based interactions that allow people silenced by ALS to communicate, and how big data and advanced analytics are making new waves in government.
The Apache Software Foundation (ASF) is one of the participating open source organizations for the Google Summer of Code 2016 (GSoC 2016) program. As a sponsor of the ASF, Pivotal is keen on supporting students looking to work in the complex and growing field of big data by developing features across a number of ASF Incubating projects that power our data products, including Apache Geode (incubating), Apache HAWQ (incubating), and Apache MADlib (incubating). For students around the world, it also offers an opportunity to pair with and learn from Pivotal’s data engineers, as well as earn $5500! The deadline to apply is Friday, March 25, 2016.
In this perspective piece, Pivotal's big data strategist, Jeff Kelly, covers the story of Landr. The company uses machine learning to master music tracks without a human touching them. Along with many other use cases, this raises the question, "Will machines replace or complement humans?" Kelly explores the topic and offers some key considerations and conclusions.
To celebrate Apache Geode’s 1.0 release last week, this article will examine ways in-memory data grids can be leveraged to transform your customers’ experience, your decision-making capabilities, and your bottom line. Specifically, we’ve highlighted four use cases demonstrating how some of our enterprise customers use in-memory data grids, like Apache Geode (incubating) and Pivotal GemFire, to support high-performance, real-time applications, without compromising speed or safety of valuable data.