At Pivotal, we see an overwhelming desire from customers to innovate with greater speed through software. In November, we launched Pivotal CF, the leading enterprise PaaS, powered by Cloud Foundry. Pivotal CF provides a turnkey private PaaS experience for agile development teams to deploy, scale and update applications. Customers have given us great feedback on capabilities that help them on-board new applications and leapfrog their competition using Pivotal CF. Providing a platform that enables teams to release software more often is not only a capability Pivotal CF enables, it’s a principle we embrace in delivering Pivotal Software. Just four months after the initial release, we are delivering many new capabilities for developers, cloud operators and service providers. We will continue to deliver a fast pace of innovation and frequent releases to help enterprises become excellent at software.
Pivotal CF 1.0 delivered many industry firsts for customers:
What’s new in Pivotal CF 1.1:
Let’s take a quick look at what some of these new capabilities mean for customers.
Deployed applications receive integrated logging so developers can go to one place to see what is happening with their applications. Pivotal CF now aggregates an application’s lifecycle events (e.g. staging, start, stop, restart), events from components like the DEA and Router, and application events (captured from STDERR and STDOUT) into a unified log stream. This allows developers to:
In Pivotal CF 1.1, log streams are scoped to a unique application ID and instance index so developers can quickly understand application behavior and pinpoint issues.
Let’s see a few examples of how easy it is to understand an app’s behavior by inspecting the log. Watch the demo.
First, we push the sample Spring Music app with cf push spring-music, then immediately tail the log using cf logs spring-music in another window:
We can see an ‘application staging’ request ([DEA]), then a Cloud Controller application event ([API]), followed by a detailed set of application staging events ([STG]), and finally an ‘application start’ event ([DEA]). The staging event log entries are particularly useful for debugging long-running staging tasks, e.g. an application with a large number of runtime dependencies and/or long-running database initialization tasks. Finally, logging output from the first application instance ([App/0]), captured from STDERR, is shown.
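For readers following along without the demo video, a minimal session looks like the following. The GUID, tag spacing and message text below are illustrative, not verbatim cf output:

```shell
# Push the app, then tail its unified log stream from a second terminal.
cf push spring-music
cf logs spring-music

# Illustrative stream, using the component tags described above:
# [API]   OUT Updated app with guid 79cc18a8... ({"state"=>"STARTED"})
# [STG]   OUT -----> Downloading app package (22M)
# [STG]   OUT -----> Uploading droplet (58M)
# [DEA]   OUT Starting app instance (index 0) with guid 79cc18a8...
# [App/0] ERR ... INFO: Initializing Spring root WebApplicationContext
```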
Adding a 3rd-party tool for log search and analysis is also easy using a remote syslog drain. In this example, we’ve set up a Splunk syslog drain. Watch the demo. We can bind to the remote service using a ‘user-provided service instance’.
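A sketch of the drain setup with the cf CLI; the endpoint host and port are placeholders for your own Splunk syslog input:

```shell
# Create a user-provided service instance whose -l flag points at the drain:
cf create-user-provided-service splunk-drain -l syslog://splunk.example.com:5140

# Bind it to the app and restart so log forwarding takes effect:
cf bind-service spring-music splunk-drain
cf restart spring-music
```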
Let’s now go to our application in a browser. We refresh our Splunk event console and now see our log stream showing router ([RTR]) Apache-formatted web log entries alongside Spring framework INFO logging. From here we can do full-text searches and apply filters for debugging and log trend analysis.
The unified log stream is also useful for understanding why instances of an app crashed. Here’s how a sample app instance crash event appears in the new unified logging format – again notice the exit reason correlated with the timestamp, app instance index (0) and application GUID:
Developers simply upload their application files to Pivotal CF for an “it just works” experience. Buildpacks detect, download and configure the appropriate languages, frameworks, containers and libraries for the application, relieving the developer of this burden. Buildpacks are a shared approach with Heroku, IBM and a broad ecosystem of providers, ensuring support for almost any language. With Pivotal CF 1.1, cloud operators can bring languages, frameworks and application containers (PHP, Python, tcServer, WebSphere Liberty, etc.) developers love into the organization using admin buildpacks while controlling the order in which buildpacks are applied. Operators can also change buildpack configuration details.
Let’s take a look at the common use case of specifying the version of the JDK used for running Java applications. Watch the demo. We slip into the cloud operator role to fork the Pivotal CF default Java buildpack and specify the version of the OpenJDK JRE in ./config/open_jdk_jre.yml. The example below shows pinning the OpenJDK version to 1.7.0_40.
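As a sketch, the pinned config file might look like the following; the repository_root value mirrors the java-buildpack’s conventions and may differ in your fork:

```yaml
# ./config/open_jdk_jre.yml (approximate layout)
repository_root: "{default.repository.root}/openjdk/{platform}/{architecture}"
version: 1.7.0_40   # pinned; the default ships as a wildcard such as 1.7.0_+
```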
We can then simply download the forked repo as a zip file and unpack it in our local environment. In our role as cloud admin we then log into the Pivotal CF instance using the new version of the CF CLI. Once logged in, we can use the buildpack commands to upload the modified Java buildpack:
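The upload itself is a single CLI call; the buildpack name and zip filename below match the examples in this walkthrough:

```shell
# Upload the modified buildpack at position 0 so it is evaluated first:
cf create-buildpack java-buildpack-modified java-buildpack-modified.zip 0
```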
This will upload the Java buildpack, name it java-buildpack-modified and place it at index 0, meaning it will be the first buildpack that is run when applications are pushed to Pivotal CF, ahead of the system-supplied Java buildpack. After uploading, we can verify the buildpack and its position:
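Verification is one command; the listing shows each installed buildpack and its position:

```shell
# List installed buildpacks; java-buildpack-modified should appear first:
cf buildpacks
```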
That’s it! Now whenever a Java-based application is pushed, the modified Java buildpack will be used to stage and run the application instead of the system-supplied Java buildpack.
To verify installation push a simple web application:
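For example (the app name here is hypothetical); during staging, the output should reference java-buildpack-modified rather than the system Java buildpack:

```shell
cf push my-spring-app
```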
Should we ever want to uninstall the buildpack, that can also be done via the cf CLI:
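Removal mirrors the upload:

```shell
cf delete-buildpack java-buildpack-modified
```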
In the developer role, we can also specify a custom buildpack by URL when pushing an application:
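The -b flag accepts a Git URL; the upstream java-buildpack repository is used here only as an example:

```shell
cf push my-app -b https://github.com/cloudfoundry/java-buildpack.git
```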
In the operator role, we can control the buildpack environment based on our organization’s needs by selecting the ‘Disable Custom Buildpacks’ option in Operations Manager at install time, which disables the -b custom buildpack option on cf push.
Operators looking to monitor the health and performance of their Pivotal CF deployment can now do so with the Pivotal Ops Metrics Add-On (beta). This add-on delivers typical machine metrics (CPU, memory, disk) and statistics for the various components of a Pivotal CF deployment via the JMX protocol:
Here’s an example of the JMX data from a DEA instance during two cf push operations. In the middle graph, the amount of memory available first goes down by 1GB and then an additional 2GB as more app instances are added. (The graph expresses the amount of available memory and disk as a percentage.) At the same time, the amount of CPU and memory actually used hasn’t changed much, as there’s no traffic going to the applications during the push request.
Operators can access this information through a JMX-compatible monitoring tool (e.g. JConsole, Java Mission Control) of their choice and integrate it with their existing monitoring and alerting infrastructure. This information can also be used for proactive monitoring use cases such as expanding the capacity of Pivotal CF components based on historical resource utilization. For example, an operator could choose to expand capacity over time by scaling out the number of DEA instances when DEA memory utilization crosses a given threshold.
Add-on services are one of the primary modes for extending functionality of the Cloud Foundry platform, and can deliver a broad range of benefits to a software development team. Services can provide data persistence for applications, as well as search, caching, messaging, and more. But services not only enhance applications, they can also better enable development teams themselves, delivering self-service provisioning of any resource a service provider can automate, such as accounts on a continuous integration system or multi-tenant project management application.
Services can be deployed anywhere your users and their applications can reach them, and may be self-operated or provided by another team or organization. Integration is by way of a Service Broker, a component operated by the service provider which advertises a catalog of one or more services, and translates API calls from Cloud Foundry into service-specific requests for resources and credentials. For more information, see Cloud Foundry Services for operators and service authors, and our Developer Guide to Services for platform end users.
With the release of the v2 Service Broker API, and new operator-facing features in Cloud Foundry, providing end users with self-service, on-demand provisioning of new service offerings has become much easier. We’ve moved responsibility for catalog management and orphan mitigation out of the service broker and into the platform, removing the need for service brokers to read and write to the platform; all API calls are now outbound to the service brokers. By implementing a v2 service broker, service providers can support multiple Cloud Foundry instances; simply provide CF operators with unique credentials to your broker.
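Because all calls are now outbound from the platform, a broker only has to serve HTTP endpoints such as the catalog. The host, credentials and API version value below are placeholders:

```shell
# Fetch the catalog as the Cloud Controller would:
curl -u broker-user:broker-pass \
     -H "X-Broker-Api-Version: 2.1" \
     http://broker.example.com/v2/catalog
```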
Along with these API changes, we’re putting control of the Service Marketplace into the hands of the Cloud Foundry operator. The Marketplace is the aggregate of all services advertised by all service brokers registered with a Cloud Foundry instance. With a URL and credentials obtained from a service provider, an operator can register the provider’s service broker with Cloud Foundry. Upon registration of a broker, the platform will fetch the catalog of services the broker offers. New service offerings are initially available only to the operator, who can then decide whether to make a service available to all end users or only to particular organizations. For more information, see Managing Service Brokers.
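Registering a broker is an operator action with the cf CLI; the broker name, credentials and URL are placeholders supplied by the service provider:

```shell
cf create-service-broker my-broker broker-user broker-pass http://broker.example.com
```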
Our goal is to enable operators to deploy and update distributed systems in minutes so enterprises can be more agile in responding to business needs. Pivotal CF Operations Manager automates large-scale service deployment by taking control of the underlying IaaS API to start distributed system components as a set of ‘jobs’ running across a resource pool of Linux containers and VMs. In Pivotal CF 1.1 these jobs are now started in parallel and component packages are pre-compiled. Cloud operators can now deploy Pivotal HD in just minutes. Watch CF BOSH deploy and scale a Hadoop cluster on AWS faster than Amazon Elastic MapReduce.
We are committed to helping a new generation of developers transform software delivery in enterprises and will continue to deliver a steady pace of innovation to help them become excellent at software.