Moving from chaos to order in your data environment

Sponsored By: CA Technologies

We live in an interconnected world where even the smallest events can have an enormous impact on an entire system. In technology, there are few areas where this is more evident than in applications.

Almost anything that touches an application will either improve or impair its performance. The challenge facing IT, and it is a tall order, is to determine whether that improvement or impairment is a product of the infrastructure, a coding change, an API, the network, the application architecture, or a connected service.

The emergence of big data has made things more complex. And make no mistake — big data has arrived in a big way:

  • Between 2014 and 2020, the total volume of big data will have increased to 44 zettabytes from roughly 4.4 zettabytes.
  • By 2020, every person on the planet will generate approximately 1.7 megabytes of data per second.
  • By 2020, there will be 450 billion business transactions (B2B and B2C) per day via the internet.
  • By 2020, there will be over six billion smartphone users.
  • In the next five years, the number of Internet of Things (IoT)-connected devices will increase by over 220 per cent.

With the emergence of big data, organizations’ prospects of moving from chaos to order in their data environments have gone from cloudy to downright bleak.

Complexity is now the order of the day. Modern applications run on virtual machines and in containers, and branch out beyond application servers to connect to any number of microservices to complete a transaction. This branching often occurs off-premises, in the cloud, giving the application architecture, once a traditional three-tier model, the appearance of a sprawling “spaghetti mess” of a biological ecosystem.

Technological progress has without a doubt made many things possible in business. The pre-internet world was one of walls and bridges: limits on communication and productivity, and a reliance on what we now know were woefully inadequate technologies. But there was a kind of bliss in this ignorance.

We may now live in a world where, more and more, anything is (or seems to be) possible. But that world is also hyper-complex, and with hyper-complexity comes greater difficulty in linking cause and effect when a problem occurs.

Simply put, when something happens, it can and does ripple across interconnected components, generating effects far from where the initial change or breakdown took place. This complexity is compounded by the fact that change occurs much more frequently, often daily, and at an ever-increasing rate.

When application components are highly distributed, communication becomes network-centric, with independent microservices communicating asynchronously via APIs. This means a dramatic increase in messaging across a variety of systems and dependencies, which in turn means more complex conditions, alerts, and system checks, all of which place greater strain on already taxed IT staff.
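
To make that pattern concrete, here is a minimal, hypothetical sketch of a single transaction fanning out asynchronously to several microservices via API calls. The service names and latencies are invented for illustration, with asyncio.sleep standing in for real network calls:

```python
import asyncio

# Hypothetical example: one user transaction fans out to several
# independent microservices, each reached over the network via an API.
# Service names and latencies are invented for illustration.

async def call_service(name: str, latency: float) -> str:
    # Stand-in for an asynchronous API call (e.g., HTTP or a message queue).
    await asyncio.sleep(latency)
    return f"{name}: ok ({latency:.2f}s)"

async def checkout_transaction() -> None:
    # A single "checkout" spans many services; a fault in any one of them
    # can ripple into the overall user experience.
    results = await asyncio.gather(
        call_service("inventory-service", 0.10),
        call_service("payment-service", 0.25),
        call_service("shipping-service", 0.15),
        return_exceptions=True,  # a slow or failing dependency must not go unseen
    )
    for result in results:
        print(result)

if __name__ == "__main__":
    asyncio.run(checkout_transaction())
```

Even in this toy example, one slow or failing dependency changes the behaviour of the entire transaction, which is exactly the ripple effect described above.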

Complimentary white paper
CA Technologies has produced a white paper, “Power Digital Performance with a New Model for Application Performance Management,” in which it tackles the problem of data complexity, speaking plainly on key areas such as:

  • The new problem of data – a matter not just of volume but of context. IT teams may have access to many excellent monitoring tools, but they unfortunately lack “total vision” context.

    Example: A network alarm indicates increased traffic latency. Tools may identify when and where problems are occurring, but because data is managed in silos, support teams don’t know which business services across the entire environment are being impacted.

    This problem is made worse by traditional approaches to baselining, which cannot scale to manage the increased volume, variety, and velocity of data produced by today’s dynamic application architectures. The result: unpredictable user experiences, higher costs, and lost time and resources. (A small sketch contrasting static and adaptive baselines follows this list.)

  • Pursuing a unified model – Organizations may capture comprehensive application, infrastructure, network, container, and cloud activity, but the data is diverse and unstructured, collected and analyzed in a piecemeal manner. Full insight can be achieved when IT teams pursue a unified data model: correlating their data, layering it, and creating a multi-dimensional, cross-domain view.
  • User experience – Data-driven insights to address business-impacting issues with actionable alerting; analyze change impact over time; gather and correlate evidence; and find and quickly fix performance issues.
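
As referenced in the baselining bullet above, the sketch below contrasts a fixed threshold with a rolling baseline on a made-up response-time series. Everything here is hypothetical: the sample values, threshold, and window size are invented to illustrate why static baselines misfire when a workload’s “normal” shifts.

```python
from statistics import mean, stdev

# Hypothetical response times (ms) for one service. A traffic shift
# raises the normal level mid-series; the final value is a real spike.
samples = [100, 105, 98, 102, 110, 180, 185, 190, 178, 182, 188, 320]

STATIC_THRESHOLD = 150  # fixed baseline, tuned for the old traffic level

def rolling_alerts(values, window=5, k=3.0):
    """Flag points more than k standard deviations above a rolling
    baseline computed from the preceding `window` samples."""
    flagged = []
    for i in range(window, len(values)):
        history = values[i - window:i]
        baseline, spread = mean(history), stdev(history)
        if values[i] > baseline + k * max(spread, 1.0):
            flagged.append((i, values[i]))
    return flagged

# The static threshold fires on every sample after the traffic shift
# (seven alerts); the rolling baseline flags the shift once, adapts to
# the new normal, and then flags only the genuine spike at the end.
static = [(i, v) for i, v in enumerate(samples) if v > STATIC_THRESHOLD]
print("static alerts :", static)
print("rolling alerts:", rolling_alerts(samples))
```

Even this toy adaptive baseline needs per-metric tuning; at the volume, variety, and velocity described above, that tuning cannot be done by hand, which is the scaling problem the white paper addresses.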

Learn more about delivering actionable insights across increasingly complex and chaotic application environments: read “Power Digital Performance with a New Model for Application Performance Management.”

About CA Technologies
CA Technologies creates software that fuels transformation for companies and enables them to seize the opportunities of the application economy. Software is at the heart of every business, in every industry. From planning to development to management and security, CA is working with companies worldwide to change the way we live, transact and communicate — across mobile, private and public cloud, distributed and mainframe environments. Visit the CA website to learn more.


Glenn Weir
Content writer at IT World Canada. Book lover. Futurist. Sports nut. Once and future author. Would-be intellect. Irish-born, Canadian-raised.