
Container monitoring: finding the right size and shape for your workloads

Published: July 3rd, 2018 By: Lynn Greiner

CA Technologies

Today, on the whole, we’re pretty comfortable with cloud technology and its components. Virtual machines (VMs) – yawn – are old hat, having been around for over half a century in one form or another, and in the main we have their management nailed.

But now there’s a new kid on the block, and it presents completely different challenges.

Containers are packages of software that include everything the software needs to run: code, runtime, system tools, system libraries, and settings. They’re self-contained, and isolate the software from anything else in the environment. But they’re also lightweight; because they run on top of the operating system kernel, they use a fraction of the memory of a virtual machine (in fact, you can run containers in a VM if you so choose).

Containers are the ideal home for the other big trend: microservices. Microservices are single components of a larger application. They’re designed to do one thing very well, and communicate with other components to form the entire application. Yes, they could be run in separate VMs, but that would be an incredible waste of resources. Containers are the ideal solution.

But think about it – for every application, there could be dozens of microservices, each living in its own container. To manage them, we need to know what’s in each container and how it runs. And to make things even more interesting, containerized systems were designed to be distributed, so it’s impossible to look at a pre-produced manifest and be certain it reflects reality.

It doesn’t help that there’s no agreement about how to monitor containers in the first place. Some believe that the ultimate measure is how fast the service responds to the user; the question then becomes, how do we use those measurements to improve the services? Others are convinced that each system is so unique that it needs a monitoring mechanism created especially for it. In theory, that suggests building the monitoring could be automated, much as a container engine constructs a container image from a build file.
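To make the build-file analogy concrete, here is a minimal sketch of what automatically generating monitoring from a declarative service spec might look like. The spec format, check names, and `/healthz` path are all illustrative assumptions, not any particular tool’s format:

```python
# Hypothetical sketch: derive monitoring checks from a declarative
# service spec, much as a container engine builds an image from a
# build file. Field names and thresholds here are invented.

def build_checks(spec):
    """Turn a service spec (dict) into a list of monitoring check definitions."""
    checks = []
    for service, cfg in spec.items():
        # Every service gets a latency check against its stated SLO.
        checks.append({
            "service": service,
            "check": "latency_p99_ms",
            "threshold": cfg.get("latency_slo_ms", 500),  # assumed default
        })
        # Services that declare an HTTP port also get a health-endpoint check.
        if "port" in cfg:
            checks.append({
                "service": service,
                "check": "http_health",
                "target": f"http://{service}:{cfg['port']}/healthz",
            })
    return checks

spec = {
    "checkout": {"port": 8080, "latency_slo_ms": 250},
    "catalog": {"latency_slo_ms": 400},  # no port: latency check only
}
print(build_checks(spec))
```

The point isn’t the specific checks; it’s that a machine-readable description of the system can produce per-service monitoring without hand-building it for each deployment.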

Looking for the right fit
Traditional monitoring solutions often don’t fit containerized environments, for several reasons:

  1. The ephemeral nature of containers. It doesn’t make sense to track individual containers, but rather clusters of containers and services. And it’s virtually impossible to poll containers the way we poll servers, which means containers must run an agent that pushes information to the monitor.
  2. The proliferation of objects, services, and metrics to track. Compared to traditional architectures, there are many more things to monitor. A traditional stack of operating system and application may have 150 metrics, while a 10-container cluster on one host could have 1,150 (roughly 150 for the host plus about 100 per container).
  3. Services are the new focal point for monitoring. A microservice may be composed of several processes, each running in its own container. Monitoring needs to be performed within and across containers to accurately gauge performance and health.
  4. A more diverse group of monitoring end-users. In today’s DevOps world, IT staff aren’t the only ones monitoring applications.
  5. New mindsets are resulting in new methods, including machine learning and analytics.
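The push model from point 1 above can be sketched in a few lines. This is a hypothetical in-container agent, not a real tool’s API: the metric values are stubbed, and the “sink” stands in for whatever transport (say, an HTTP POST) the monitoring backend accepts:

```python
import json
import time

# Minimal sketch of a push-based metrics agent: since short-lived
# containers can't be polled like servers, an agent inside the
# container periodically pushes snapshots to the monitoring backend.

def collect_metrics(container_id):
    """Gather one snapshot of this container's metrics (stubbed values)."""
    return {
        "container_id": container_id,
        "timestamp": time.time(),
        "cpu_percent": 12.5,       # stub; a real agent would read cgroup stats
        "memory_bytes": 64 << 20,  # stub: 64 MiB
    }

def push(sink, container_id):
    """Serialize one snapshot and hand it to the sink (e.g., an HTTP POST)."""
    payload = json.dumps(collect_metrics(container_id))
    sink(payload)
    return payload

# In place of a network call, collect pushed payloads in a list.
sent = []
push(sent.append, "web-7f4c")
print(sent[0])
```

Because the agent initiates the connection, the monitor never needs to discover or reach into a container that may be gone seconds later.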

Use cases
There are four major use cases to consider:

  • Knowing when something is wrong. Alerting based on symptoms (e.g., latency) rather than potential causes (e.g., CPU usage) is critical to avoiding alert fatigue.
  • Having the information to debug a problem. A tree structure helps track down issues, starting at the root service.
  • Trending and reporting. Critical for everything from capacity planning to determining the pricing model for the service.
  • Plumbing. Moving information between systems – for example, a function sending sales per hour to a business intelligence dashboard could be built separately, but if a monitoring system allows custom data sources and lets you extract captured data, its utility increases.
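The first use case above, symptom-based alerting, can be illustrated with a small sketch. The threshold and window size are illustrative assumptions; the idea is to fire on what users actually feel (sustained slow responses), not on a single spike or a cause-side metric like CPU:

```python
from collections import deque

# Sketch of symptom-based alerting: alert on latency the user experiences,
# and only when a whole window of recent requests is slow, which damps
# one-off spikes and reduces alert fatigue.

class LatencyAlert:
    def __init__(self, threshold_ms=300, window=5):
        self.threshold_ms = threshold_ms
        self.samples = deque(maxlen=window)  # keeps only the last N samples

    def observe(self, latency_ms):
        """Record one request latency; return True if an alert should fire."""
        self.samples.append(latency_ms)
        window_full = len(self.samples) == self.samples.maxlen
        return window_full and all(s > self.threshold_ms for s in self.samples)

alert = LatencyAlert(threshold_ms=300, window=3)
for ms in [120, 450, 480, 510]:
    print(ms, alert.observe(ms))
```

A cause-side alert on CPU would page someone even when users see no slowdown; this one stays quiet until the symptom itself persists.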

Design goals
Of course, managing containers doesn’t begin when they’re deployed; it starts at the design stage. Applications need to be built operations-ready. To that end, design goals for containerized applications should include:

  • Acknowledge that services could be interrupted at any time, and accommodate that risk with functions like graceful shutdown (and cleanup), and the creation of checkpoints where appropriate.
  • Expose an interface for health checks, so the container platform can identify and remediate container errors.
  • Include a mechanism to reliably identify a faulty instance, and its root cause.
  • Provide full context to the operator in logs; this could include container ID, container host, container image, and the container platform’s provided metadata.

Learn more about container monitoring and management in this in-depth report.