
Why Is My NTP Server Costing Me $500/Year? Part 2: Characterizing the NTP Clients

In the previous blog post, we concluded that providing an Amazon AWS-based NTP server that was a member of the NTP Pool Project was incurring ~$500/year in bandwidth charges.

In this blog post we examine the characteristics of NTP clients (mostly virtualized). We are particularly interested in the NTP polling interval, the frequency with which the NTP client polls its upstream server. The frequency with which our server is polled correlates directly with our costs (our $500 in Amazon AWS bandwidth corresponds to 46 billion NTP polls [1]). Determining which clients poll excessively may provide us a tool to reduce the costs of maintaining our NTP server.

This blog post describes the polling interval of several clients running under several hypervisors, and one client running on bare metal (OS X). It also describes our methodology in gathering those numbers.

NTP Polling Intervals

The polling intervals of ntpd vary from 64 seconds (the minimum) to 1024 seconds (the maximum)—as much as sixteenfold (note that these values can be overridden in the configuration file, but for purposes of our research we are focusing solely on the default values).
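For reference, ntpd expresses these bounds as powers of two in its configuration file; a hypothetical override (the pool hostname is a placeholder, not one of our servers) would look like:

```
# hypothetical /etc/ntp.conf line: minpoll/maxpoll are log2 seconds,
# so 6 -> 64 seconds and 10 -> 1024 seconds (the defaults)
server 0.pool.ntp.org iburst minpoll 6 maxpoll 10
```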

We discover that a client's polling behavior correlates strongly with the hypervisor it runs under (e.g. the VirtualBox NTP clients frequently poll at the default minimum poll interval, 64 seconds).

Chart of NTP Polling Intervals

NTP Polling Intervals over a 3-hour period. Note the heavy cluster of dots around 64 seconds—the minimum polling interval

Close-up of the 64-second polling interval

A close-up of the 64-second polling interval (“minpoll”). Notice the dots are mostly VirtualBox with a sprinkling of KVM. NTP clients perform poorly under those hypervisors.

By examining the chart (the chart and the underlying data can be viewed on Google Docs), we can see the following:

  • The guest VMs running under VirtualBox perform the worst (with one exception: Windows). Note that their polling intervals are clustered around the 64-second mark—the minimum allowed polling interval.
  • The Windows VM appears to query for time but once a day. It doesn’t appear to be running ntpd; rather, it appears to set the time via the NTP protocol with a proprietary Microsoft client.
  • The OS X host only queried its NTP server once during a 3-hour period. Since this value (10800 seconds) is more than the default maxpoll value (1024 seconds), we suspect that OS X uses a proprietary daemon and not ntpd.
  • The guest VM running under ESXi performs quite well; although its datapoint is obscured in the chart, if one were to browse the underlying data, one would see that its datapoints are clustered around maxpoll, i.e. 1024 seconds.
  • The guest VM running under Xen (AWS) also performs quite well; its datapoints are also clustered around maxpoll.
  • The guest VM running under KVM performs better than the VirtualBox VMs, which is admittedly damning with faint praise. Their polling intervals tend to cluster around 128 seconds, with smaller clusters at 64 and 256 seconds.
Guest Operating System | Hypervisor | ntpd version | Average polling interval, seconds (higher is better)
Ubuntu 14.04 64-bit | VirtualBox 4.3.12 r93733 on OS X 10.9.4 | 4.2.6p5 | 126
FreeBSD 10.0 64-bit | VirtualBox 4.3.12 r93733 on OS X 10.9.4 | 4.2.4p8 | 62
Windows 7 Pro 64-bit | VirtualBox 4.3.12 r93733 on OS X 10.9.4 | N/A | 86400
OS X 10.9.4 | none (bare metal) | N/A | 10800
Ubuntu 13.04 64-bit | AWS (Xen), t1.micro | 4.2.6p5 | 1056
FreeBSD 9.2 64-bit | Hetzner (KVM), VQ7 | 4.2.4p8 | 146
Ubuntu 12.04 64-bit | ESXi 5.5 | 4.2.6p3 | 1048


1. Choosing the Hypervisors and OSes to Characterize

We decide to characterize the NTP traffic of four different operating systems:

  1. Windows 7 64-bit
  2. OS X 10.9.3
  3. Ubuntu 64-bit (14.04, 13.04, and 12.04)
  4. FreeBSD [2] 64-bit (10.0 and 9.2)

We decide to test the following Hypervisors:

  1. VirtualBox 4.3.12 r93733
  2. KVM (Hetzner)
  3. Xen (Amazon AWS)
  4. ESXi 5.5

Why We Are Not Characterizing NTP Clients on Embedded Systems

We’re ignoring embedded systems, a fairly broad category that covers everything from devices as modest as a home WiFi access point to ones as complex as a high-end Juniper router.

There are two reasons we are ignoring those systems.

  • We don’t have the resources to test them (we don’t have the time or the money to purchase dozens of home gateways, configure them, and measure their NTP behavior, let alone the more-expensive higher-end equipment)
  • The operating systems of many embedded systems have roots in the Open Source community (e.g. dd-wrt is Linux-based, Juniper’s JunOS is FreeBSD-based). There’s reason to believe that the NTP clients of those systems behave the same as the systems upon which they are based.

We wish we had the resources to characterize embedded systems—sometimes they are troublemakers:

  • The operating systems of embedded systems that do not have roots in the Open Source community have a poor track record of providing good NTP clients. Netgear, SMC, and D-Link, to mention a few, have had their missteps.

Why Windows and OS X NTP Clients Don’t Matter

Windows and Apple clients don’t matter. Why?

  • They are not our NTP clients. Both Microsoft and Apple have made NTP servers available ( and, respectively) and have made them the defaults for their operating systems.
  • They rarely query for time: Windows 7 only once a day, and OS X every few hours.

We suspect that fewer than 1% of our NTP clients are either Windows or OS X (but we have no data to confirm that).

Regardless of their usefulness, we’re characterizing the behavior of their clients anyway.

2. Setting Up the NTP Clients

The ESXi, Xen (AWS), and KVM (Hetzner) clients have already been set up (not for characterizing NTP, but we’re temporarily borrowing them to perform our measurements); however, the VirtualBox clients (specifically the Ubuntu and FreeBSD guest VMs) need to be set up.

The 3 VirtualBox and 1 Bare-Iron NTP Clients

We choose one machine for each of the four primary operating systems (OS X, Windows, Linux, *BSD). We define hostnames, IP addresses, and, in the case of FreeBSD and Linux, Ethernet MAC addresses (we use locally-administered MAC addresses [3]). Strictly speaking, creating hostnames, defining MAC addresses, and creating DHCP entries is not necessary; we put in the effort because we prefer structure:

  • hostname↔IP address mappings are centralized in DNS (which is technically a distributed, not centralized, system, but we’re not here to quibble)
  • IP address↔MAC address mappings are centralized in one DHCP configuration file rather than being balkanized in various Vagrantfiles.
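A sketch of what those centralized DHCP entries might look like, assuming ISC dhcpd. The host names and fixed addresses below are invented placeholders (the real values don't appear in this post); only the MAC addresses are ours:

```
# hypothetical ISC dhcpd host declarations; names and IPs are placeholders
host vm-ubuntu-14-04 {
  hardware ethernet 02:00:11:22:33:44;
  fixed-address 192.168.1.44;   # placeholder
}
host vm-freebsd-10-0 {
  hardware ethernet 02:00:11:22:33:55;
  fixed-address 192.168.1.55;   # placeholder
}
```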

Here are the Four Hosts of the Apocalypse [4] (with apologies to St. John the Evangelist)

Operating System | Fully-Qualified Domain Name | IP Address | MAC Address
OS X 10.9.3 | | | 00:3e:e1:c2:0e:1a
Windows 7 Pro 64-bit | | | 08:00:27:ea:2e:43
Ubuntu 14.04 64-bit | | | 02:00:11:22:33:44
FreeBSD 10.0 64-bit | | | 02:00:11:22:33:55

Use Vagrant to Configure Ubuntu and FreeBSD VMs

We use Vagrant (a tool that automates the creation and configuration of VMs) to create our VMs. We add the Vagrant “boxes” (VM templates) and create & initialize the necessary directories:

vagrant box add ubuntu/trusty64
vagrant box add chef/freebsd-10.0
cd ~/workspace
mkdir vagrant_vms
cd vagrant_vms
for DIR in ubuntu_14.04 fbsd_10.0; do
  mkdir $DIR
  pushd $DIR
  vagrant init
  popd
done

Now let’s configure the Ubuntu VM. We have two goals:

  1. We want the Ubuntu VM to have an IP address that is distinct from the host machine’s. This will enable us to distinguish the Ubuntu VM’s NTP traffic from the host machine’s (the host machine, by the way, is an Apple Mac Pro running OS X 10.9.3).
  2. We want the Ubuntu VM to run NTP

The former is accomplished by modifying the setting in the Vagrantfile to use a bridged interface (in addition to Vagrant’s default use of a NAT interface); the latter is accomplished by creating a shell script that installs and runs NTP and modifying the Vagrantfile to run said script.

cd ubuntu_14.04/
vim Vagrantfile
  config.vm.box = 'ubuntu/trusty64'
  config.vm.network :public_network, bridge: 'en0: Ethernet 1', mac: '020011223344', use_dhcp_assigned_default_route: true
  config.vm.provision :shell, path: ''
cat > <<EOF
  #!/usr/bin/env bash
  apt-get install -y ntp
EOF
vagrant up

Now that we have set up an Ubuntu 14.04 as a client, let’s turn our attention to FreeBSD 10.0.

cd ../fbsd_10.0
vim Vagrantfile
  config.vm.box = 'chef/freebsd-10.0'
  # Use NFS as a shared folder
  config.vm.network 'private_network', ip: ''
  config.vm.network :public_network, bridge: 'en0: Ethernet 1', mac: '020011223355', use_dhcp_assigned_default_route: true
  config.vm.synced_folder ".", "/vagrant", :nfs => true, id: "vagrant-root"
  config.vm.provision :shell, path: ''
cat > <<EOF
  #!/usr/bin/env bash
  route delete default
  route add default
  grep ntpd_enable /etc/rc.conf || echo 'ntpd_enable="YES"' >> /etc/rc.conf
  /etc/rc.d/ntpd start
EOF
vagrant up

The FreeBSD Vagrantfile is slightly different [5] from the Ubuntu Vagrantfile.

3. Capturing the NTP Traffic

We enable packet tracing on the upstream firewall (in the case of the VirtualBox guests or the bare-iron OS X host) or on the VM itself (in the case of our AWS/Xen, Hetzner/KVM, and ESXi guests).

Here are the commands we used:

# on our internal firewall
sudo tcpdump -ni em0 -w /tmp/ntp_vbox.pcap -W 1 -G 10800 port ntp
# on our AWS t1.micro instance
sudo tcpdump -w /tmp/ntp_upstream_xen.pcap -W 1 -G 10800 port ntp and ( host or host or host or host )
# on our Hetzner FreeBSD instance
sudo tcpdump -i re0 -w /tmp/ntp_upstream_kvm.pcap -W 1 -G 10800 port ntp and ( host 2a01:4f8:141:282::5:3 or host 2a01:4f8:201:4101::5 or host or host )
# our ESXi 5.5 instance
sudo tcpdump -w /tmp/ntp_upstream_esxi.pcap -W 1 -G 10800 port ntp and host


  • we passed the -W 1 -G 10800 flags to tcpdump to enable packet capture for 10800 seconds (i.e. 3 hours) and then stop. This allows us to capture the same duration of traffic from each of our machines, which makes certain comparisons easier (e.g. the number of times upstream servers were polled over the course of three hours).
  • we used the -w flag (e.g. -w /tmp/ntp_vbox.pcap) to save the output to a file. This enables us to make several passes at the capture data.
  • We filtered for ntp traffic (port ntp)
  • for machines that were NTP servers as well as clients, we restricted traffic capture to the machines that were its upstream server(s) (e.g. the ESXi’s Ubuntu VM’s upstream server is, so we appended and host to the filter)

4. Converting NTP Capture to CSV

We need to convert our output into .csv (comma-separated values) files to enable us to import them into Google Docs.

VirtualBox Clients

Ubuntu 14.04

We determine the upstream NTP servers using ntpq:

vagrant@vagrant-ubuntu-trusty-64:~$ ntpq -pn
     remote           refid      st t when poll reach   delay   offset  jitter
+        2 u    4   64  377   52.586  -34.323   2.141
-    2 u   63   64  375   84.136  -28.303   3.513
-2001:19f0:1590:        2 u   56   64  377   91.651  -24.310   2.218
*     2 u   35   64  377   59.146  -32.741   3.297
+    2 u    2   64  377  147.590  -32.185   1.860

Next we create a .csv file to be imported into Google Docs for additional manipulation:

  tcpdump -tt -nr ~/Downloads/ntp_vbox.pcap src host $NTP_SERVER |
   awk 'BEGIN { prev = 0 }; { printf "%d\n", $1 - prev; prev = $1 }' |
   tail +2 | sort | uniq -c |
   sort -k 2 |
   awk "BEGIN { print \"polling interval (seconds), VB/Ubu/$NTP_SERVER\" }
        { printf \"%d,%d\n\", \$2, \$1 }" > /tmp/vb-ubu-$NTP_SERVER.csv

Notes regarding the shell script above:

  • tcpdump's -tt flag generates raw (seconds-since-the-Epoch) timestamps, from which we can easily calculate the amount of time between each response
  • tcpdump's src host parameter restricts the packets to NTP responses and not NTP queries (it's simpler if we pay attention to only half the conversation)
  • the first awk command prints the interval (in seconds) between each NTP response
  • the tail command strips the very first response whose time interval is pathological (i.e. whose time interval is the number of seconds since the Epoch, e.g. 1404857430)
  • the sort and uniq tells us the number of times a response was made for a given interval (e.g. “384 NTP responses had a 64-second polling interval”)
  • the second sort command sorts the query by seconds, lexically (not numerically). The reason we sort lexically is because the join command, which we will use in the next step, requires lexical collation, not numerical. (in other words, “1 < 120 < 16 < 2”, not “1 < 2 < 16 < 120”)
  • the second awk command puts the data in a format that’s friendly for Google spreadsheets
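To sanity-check the pipeline, we can feed it a few fabricated tcpdump -tt-style timestamps (synthetic epochs, not from our capture; `tail -n +2` is the portable spelling of `tail +2`):

```shell
# four synthetic NTP responses: 64, 64, and 128 seconds apart
printf '%s\n' 1404857430.5 1404857494.5 1404857558.5 1404857686.5 |
 awk 'BEGIN { prev = 0 }; { printf "%d\n", $1 - prev; prev = $1 }' |
 tail -n +2 | sort | uniq -c
# reports one 128-second interval and two 64-second intervals
```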
FreeBSD 10.0

We use ntpq -pn to determine the upstream NTP servers.

Then we create .csv files:

  tcpdump -tt -nr ~/Downloads/ntp_vbox.pcap src host $NTP_SERVER |
   awk 'BEGIN { prev = 0 }; { printf "%d\n", $1 - prev; prev = $1 }' |
   tail +2 | sort | uniq -c |
   awk "BEGIN { print \"polling interval (seconds), VB/FB/$NTP_SERVER\" }
        { printf \"%d,%d\n\", \$2, \$1 }" |
   sort > /tmp/vb-fb-$NTP_SERVER.csv
Windows 7

The Windows VM is easier: there's only one NTP server it queries (, so we can filter by our VM's IP address rather than the NTP server's IP address:

tcpdump -tt -nr ~/Downloads/ntp_vbox.pcap dst host |
 awk 'BEGIN { prev = 0 }; { printf "%d\n", $1 - prev; prev = $1 }' |
 tail +2 | sort | uniq -c |
 awk "BEGIN { print \"polling interval (seconds), VB/W7\" }
   { printf \"%d,%d\n\", \$2, \$1 }" |
 sort > /tmp/vb-w7.csv

OS X

Like Windows, OS X queries only one NTP server (, so we can filter by our host's IP address:

tcpdump -tt -nr ~/Downloads/ntp_vbox.pcap dst host |
 awk 'BEGIN { prev = 0 }; { printf "%d\n", $1 - prev; prev = $1 }' |
 tail +2 | sort | uniq -c |
 awk "BEGIN { print \"polling interval (seconds), OS X\" }
   { printf \"%d,%d\n\", \$2, \$1 }" |
 sort > /tmp/osx.csv

Xen (AWS) Client

  tcpdump -tt -nr ~/Downloads/ntp_upstream_xen.pcap src host $NTP_SERVER |
   awk 'BEGIN { prev = 0 }; { printf "%d\n", $1 - prev; prev = $1 }' |
   tail +2 | sort | uniq -c |
   awk "BEGIN { print \"polling interval (seconds), Xen/Ubu/$NTP_SERVER\" }
        { printf \"%d,%d\n\", \$2, \$1 }" |
   sort > /tmp/xen-ubu-$NTP_SERVER.csv

KVM (Hetzner) Client

  tcpdump -tt -nr ~/Downloads/ntp_upstream_kvm.pcap src host $NTP_SERVER |
   awk 'BEGIN { prev = 0 }; { printf "%d\n", $1 - prev; prev = $1 }' |
   tail +2 | sort | uniq -c |
   awk "BEGIN { print \"polling interval (seconds), KVM/FB/$NTP_SERVER\" }
        { printf \"%d,%d\n\", \$2, \$1 }" |
   sort > /tmp/kvm-fb-$NTP_SERVER.csv

ESXi Client

tcpdump -tt -nr ~/Downloads/ntp_upstream_esxi.pcap src host |
 awk 'BEGIN { prev = 0 }; { printf "%d\n", $1 - prev; prev = $1 }' |
 tail +2 | sort | uniq -c |
 awk "BEGIN { print \"polling interval (seconds), ESXi/Ubu\" }
   { printf \"%d,%d\n\", \$2, \$1 }" |
 sort > /tmp/esxi-ubu.csv

5. Merging the 17 .csv Files

Next we need to merge the above files into one file that we can easily import into Google Docs.

CSV_INDEX=0
CSV_TEMP=$$.0.csv
COMMAS=""
for CSV_FILE in *.csv; do
  CSV_INDEX=$(( CSV_INDEX + 1 ))
  [ ! -f $CSV_TEMP ] && touch $CSV_TEMP
  ( join -t ,      $CSV_TEMP $CSV_FILE
    join -v 1 -t , $CSV_TEMP $CSV_FILE | sed "s/$/,/"
    join -v 2 -t , $CSV_TEMP $CSV_FILE | sed "s/\([^,]*$\)/$COMMAS\1/" ) |
  sort > $$.$CSV_INDEX.csv
  CSV_TEMP=$$.$CSV_INDEX.csv
  COMMAS="$COMMAS,"
done


  • we use the join command to merge the proper fields together; this is so our scatterplot will display properly. The join-field is the polling interval in seconds
  • we use 3 iterations of join
    1. the first one merges the fields with common polling intervals
    2. the second one merges the polling intervals that are present in the first file but not the second
    3. the final one merges the polling intervals that are present in the second file but not the first
  • we invoke sort in order to keep our temporary files lexically collated, a requirement of join
  • we create a series of temporary files, the last one of which (e.g. 5192.17.csv) we will import into Google Docs
  • we need to perform one final sort before import (we need to sort numerically, not lexically):
sort -g < 5192.17.csv > final.csv
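To make the three-join merge concrete, here is a toy run on two fabricated two-column .csv files (the counts are invented). Note that the inputs must already be lexically sorted on the join field:

```shell
cd "$(mktemp -d)"
# two per-server .csv files, lexically sorted on the polling-interval field
printf '%s\n' '128,3' '64,10' 'polling interval (seconds),A' > a.csv
printf '%s\n' '256,2' '64,7'  'polling interval (seconds),B' > b.csv
( join -t ,      a.csv b.csv                           # intervals present in both
  join -v 1 -t , a.csv b.csv | sed 's/$/,/'            # only in a.csv: pad on the right
  join -v 2 -t , a.csv b.csv | sed 's/\([^,]*$\)/,\1/' # only in b.csv: pad on the left
) | sort -g
```

The result is the header row followed by 64,10,7 then 128,3, then 256,,2: one row per polling interval, with an empty cell wherever a file had no datapoint, which is exactly the shape the spreadsheet needs.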

6. Mastering Google Docs

In order to create our scatterplot, we must comply with Google’s requirements. For example, each column needs at least 1 datapoint.

  • we add a single datapoint with a polling interval of 10800 seconds to the OS X column. During our 3-hour packet capture, our OS X host only queried its NTP server once, and we discarded that packet (we measure intervals between packets, and we need at least 2 packets to measure an interval). Our data now indicates that OS X queries once every 3 hours.
  • we remove the column VB/FB/: that NTP server is unreachable/broken and has no datapoints.
  • we add a single datapoint with a polling interval of 86400 seconds to the VB/W7 column. Windows 7 appears to query for time information only once per day (discovered not in this packet capture but in an earlier one)


1 Math is as follows (assuming 90 bytes per NTP poll, $500 total, and $0.12 per GB):

$500 × (1 GB / $0.12) × (1,000,000,000 bytes / GB) × (1 poll / 90 B)
  = 46,296,296,296 polls
  ≈ 46.3 Gpolls
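The same arithmetic can be double-checked with integer shell arithmetic (working in cents to avoid fractions):

```shell
# $500 = 50,000 cents; ÷ 12 cents-per-GB, × 10^9 bytes-per-GB, ÷ 90 bytes-per-poll
echo $(( 50000 * 1000000000 / 12 / 90 ))   # 46296296296
```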

2 The inclusion of FreeBSD in the list of Operating Systems is made less for its prevalence (it is vastly overshadowed by Linux in terms of deployments) than for the strong emotional attachment the author has for it.

3 To define our own addresses without fear of colliding with an existing address, we set the locally administered bit (the second least significant bit of the most significant byte) to 1.
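As a quick sketch, the bit can be checked with shell arithmetic on the MAC's first octet, comparing our chosen addresses against VirtualBox's factory-assigned OUI:

```shell
# bit 1 of the first octet is the locally-administered bit
echo $(( (0x02 >> 1) & 1 ))   # 02:00:11:22:33:44 -> 1: locally administered
echo $(( (0x08 >> 1) & 1 ))   # 08:00:27:ea:2e:43 -> 0: globally administered
```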

4 The term “host” has a specific connotation within the context of virtualization, and we are deliberately mis-using that term to achieve poetic effect (i.e. “hosts” sounds similar to “horsemen”). But let’s be clear on our terms: a “host” is an Operating System (usually running on bare-iron, but optionally running as a guest VM on another host) running virtualization software (e.g. VirtualBox, Fusion, ESXi, Xen); a “guest” is an operating system that’s running on top of the virtualization software which the host is providing.

In our example only one of the 4 hosts is truly a host—the OS X box is a true host (it provides the virtualization software (VirtualBox) on top of which the remaining 3 operating systems (Ubuntu, FreeBSD, and Windows 7) are running).

5 We’d like to point out a shortcoming of the FreeBSD setup versus the Ubuntu setup: in the Ubuntu setup, we were able to use a directive (use_dhcp_assigned_default_route) to configure Ubuntu to send outbound traffic via its bridged interface. Unfortunately, that directive didn’t work for our FreeBSD VM, so we used a script to set the default route instead. But the script is not executed when the FreeBSD VM is rebooted, at which point the FreeBSD VM reverts to using the NAT interface, and we can no longer distinguish the FreeBSD NTP traffic from the OS X host’s NTP traffic.

The workaround is to never reboot the FreeBSD VM. Instead, we use vagrant up and vagrant destroy when we need to bring up or shut down the FreeBSD VM. We incur a penalty in that it takes slightly longer to boot our machine via vagrant up.

Also note that we modified the to use a host-only network instead of the regular NAT network. That change was necessary for the FreeBSD guest to run the required script. VirtualBox was kind enough to warn us:

NFS requires a host-only network to be created.
Please add a host-only network to the machine (with either DHCP or a
static IP) for NFS to work.
