Is your AI data wrangling out of control?

We’ve all read articles extolling the limitless potential of Artificial Intelligence (AI) and its cousin, machine learning. As a data analyst, you may be nervous about the sluggish progress your AI project team is making after it has slipped past multiple milestones. You are worried that management will lose patience and cancel what you see as an exciting project with tangible business value.

Most likely, your problem is the tedium, high effort, and unpredictability of data wrangling, which cause:

  1. Stalled AI projects.
  2. Increased AI project costs.
  3. Doubts around AI project insights and recommendations.
  4. Disappointing benefits from AI projects.

Here are the considerations that will position your AI project’s data for success.

Data wrangling struggle

Data wrangling refers to all the effort data scientists and software developers invest in data preparation before data exploration and data analytics can reveal the actionable insights you hope to gain.

You are acutely aware that data wrangling has consumed 60 – 80% of the effort of data analysts and data scientists for decades now. The percentage has not decreased over time, despite vastly improved software and an increased emphasis on data stewardship in many organizations, because data volumes have grown exponentially over the same period. This growth has consumed the value of all the data management improvements that many organizations have implemented.

Improving data wrangling

The following categories of software can help you bring data wrangling under control:

  1. Data visualization.
  2. Data preparation.
  3. Extract, Transform, and Load (ETL).
  4. AI-driven data improvement insights.
  5. Robotic Process Automation (RPA).
  6. Custom SQL.

You can speed up and improve data wrangling by using this software and implementing clearer data improvement steps to make data preparation:

  1. Less tedious for expensive, experienced data experts.
  2. Less demanding of staff effort overall.
  3. Quicker and therefore cheaper.
  4. More likely to achieve significant data quality improvements.
  5. More operationalized to run on a predefined schedule, in a governed and scalable manner.

All the steps below are performed to varying degrees by every AI project before moving on to the data analytics step that produces the business value. If you can’t relate the planned work of your project team to these steps, then it’s highly likely that your AI project is inadequately organized and is at risk of not reaching the project goal.

 

Discovering data

Data discovery is the first step. Its goal is to identify the data sources required to achieve the goal of your AI project. Typical data sources include:

  1. Operational datastores related to on-premises applications and to externally hosted Software-as-a-Service (SaaS) applications.
  2. Small informal datastores, usually Excel workbooks, that house crucial bits of information.
  3. Data warehouses and data lakes.
  4. Public free and subscription data.

The benefit of data discovery is that it provides an initial indication of the:

  1. Extent to which your organization’s data can support the goal of the AI project.
  2. Likely data preparation effort.

Data discovery is typically performed by analysts interviewing business and IT staff and does not require software support.

If your AI project team hasn’t quickly identified the required data sources, then your project is off to a poor start. You may be dealing with:

  1. Fundamental disagreements about the project goal and how to reach it.
  2. An overly ambitious AI project goal.
  3. Significant gaps in your application portfolio that must be addressed before the AI project can succeed.

Profiling data

Data profiling is the process of detecting the magnitude of various data problems in every selected data source. Data problems typically include:

  1. Missing data – either missing rows or nulls in columns where a value must exist.
  2. Invalid data – values that are not on the list of allowed values for a column or values that are outside the allowed range of values.
  3. Invalid relationships – where two related values, in columns of the same row or in related tables, cannot both be accurate.
  4. Inconsistent data – where values of the same column in multiple datastores do not match.
  5. Coverage limitations – when some data sources contain only a geographic or date range subset of the data.
  6. Misleading patterns – values across many rows that are likely erroneous.
  7. Unlikely outliers – values or rows that are not interesting aberrations but are more likely the result of problems in the data acquisition process.

Data profiling results are used to plan the work to improve the quality and completeness of the data.

Data profiling is too often performed by simple visual inspection of the data by experts in the data. This approach is tedious and expensive, and the results can be uneven. Data profiling can be performed more quickly and consistently using simple SQL queries and data visualization software to highlight and quantify data problems for resolution.
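To illustrate, here is a minimal sketch in Python with pandas of how a few of the data problems above could be quantified automatically. The file name, column names, allowed values, and thresholds are hypothetical placeholders, not the actual checks for your data sources.

    # Minimal data profiling sketch using pandas (hypothetical file and column names).
    import pandas as pd

    df = pd.read_csv("customers.csv")  # a selected data source, loaded for profiling

    profile = {}

    # Missing data: count nulls in columns where a value must exist.
    profile["missing_values"] = df[["customer_id", "country", "signup_date"]].isna().sum().sum()

    # Invalid data: values not on the allowed list, or outside the allowed range.
    allowed_countries = {"CA", "US", "MX"}  # hypothetical allowed values
    profile["invalid_country"] = (~df["country"].isin(allowed_countries)).sum()
    profile["invalid_age"] = ((df["age"] < 0) | (df["age"] > 120)).sum()

    # Unlikely outliers: values more than three standard deviations from the mean.
    spend = df["annual_spend"]
    profile["spend_outliers"] = (abs(spend - spend.mean()) > 3 * spend.std()).sum()

    # Duplicate rows that may point to problems in the data acquisition process.
    profile["duplicate_rows"] = df.duplicated().sum()

    for check, count in profile.items():
        print(check, count, sep=": ")

A handful of scripted checks like these can be rerun consistently for every data source, and their counts charted in data visualization software, which keeps the profiling step from depending on visual inspection alone.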

If your AI project team appears to be stuck in endless data profiling work, then the team needs to focus on how to enhance the automation of this step.

Structuring data

Structuring the data includes the following tasks:

  1. Change values in all date columns to a common data type and date format.
  2. Change values in key columns to the agreed format. This action often means reordering values in compound keys.
  3. Change the same column in all selected data sources to the same name and data type.
  4. Change values in numeric columns to conform to the standard for real or scientific notation and for the number of decimal places.
  5. Recalculate values that don’t conform to the unit of measure standards for the organization.
  6. Parse values from unstructured text columns into new columns.

The benefit of structuring the data includes minimizing the risk that:

  1. Data scientists will inadvertently add together incompatible numeric values that first need to be recalculated.
  2. Data scientists will perform date arithmetic on incompatible date values.
  3. Confusion about column definition and contents will produce misleading results.
  4. Joins will produce nonsensical results or fail when they should not.

Structuring the data is too complex to perform manually. Data preparation software can handle most of the structuring. Custom SQL software is typically developed for the rest.
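As a concrete illustration of these tasks, here is a minimal pandas sketch; the column names, standard names, regular expression, and unit conversion are hypothetical assumptions that would come from your organization’s data standards.

    # Minimal data structuring sketch using pandas (hypothetical columns and standards).
    import pandas as pd

    df = pd.read_csv("orders.csv")  # one selected data source

    # Convert all date columns to a common data type and format.
    for col in ["order_date", "ship_date"]:
        df[col] = pd.to_datetime(df[col], errors="coerce")

    # Rename columns to the agreed standard names shared across data sources.
    df = df.rename(columns={"cust_no": "customer_id", "amt": "order_amount"})

    # Standardize numeric columns to the agreed number of decimal places.
    df["order_amount"] = df["order_amount"].round(2)

    # Recalculate values to the organization's unit-of-measure standard
    # (pounds to kilograms, as a hypothetical example).
    df["weight_kg"] = df["weight_lb"] * 0.453592

    # Parse a value from an unstructured text column into a new column.
    df["postal_code"] = df["shipping_address"].str.extract(
        r"([A-Z]\d[A-Z]\s?\d[A-Z]\d)", expand=False)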

If your AI project team appears to be struggling to complete data structuring, often the problem is that the team is immersed in endless debates about ideal data standards. As a data analyst, you can remind them that the project schedule is important and that compromises are acceptable within the project scope.

Cleaning data

Cleaning up the data corrects the problems that the data profiling step identified. This step is traditionally the most time-consuming part of the data preparation process. Data clean-up tasks implement master data management concepts. Major tasks include:

  1. Filling in missing values.
  2. Correcting invalid data and relationships.
  3. Selecting the best value when inconsistent data exists.
  4. Removing duplicate, extraneous and outlier data rows.
  5. Masking private or sensitive values.

The benefit of cleaning up the data is that it ensures high-confidence, reproducible insights from data analysis.

The following software is typically used where possible to automate the clean-up:

  1. Data preparation.
  2. Custom SQL software.
  3. A user-assisted RPA bot for trickier or ambiguous cases.

The remaining data clean-up that is not amenable to automation is performed manually by data experts. However, organizing the work into these steps and applying software dramatically reduces the effort and improves the data quality.

The clean-up can introduce a risk that the data analytics conclusions will be challenged. It’s important that the software used for this task produces a detailed audit trail of the changes made for manual review.
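As a rough illustration of what an auditable clean-up could look like, here is a minimal pandas sketch; the clean-up rules, default values, and column names are hypothetical, and real rules would come from your master data management standards.

    # Minimal auditable data clean-up sketch using pandas (hypothetical rules).
    import pandas as pd

    df = pd.read_csv("customers.csv")  # a profiled data source
    audit_trail = []                   # every change is recorded for manual review

    def apply_fix(mask, column, new_value, reason):
        """Apply one correction and log each affected row in the audit trail."""
        for idx in df.index[mask]:
            audit_trail.append({"row": idx, "column": column,
                                "old": df.at[idx, column],
                                "new": new_value, "reason": reason})
        df.loc[mask, column] = new_value

    # Fill in missing values with an agreed default.
    apply_fix(df["country"].isna(), "country", "UNKNOWN", "missing country")

    # Correct invalid data: map a known non-standard code to the standard value.
    apply_fix(df["country"] == "CAN", "country", "CA", "non-standard country code")

    # Remove duplicate rows, keeping the first occurrence, and log the removals.
    duplicates = df.duplicated()
    audit_trail.extend({"row": idx, "column": "*", "old": "duplicate row",
                        "new": "dropped", "reason": "duplicate"}
                       for idx in df.index[duplicates])
    df = df[~duplicates]

    # Persist the audit trail so data experts can review every change made.
    pd.DataFrame(audit_trail).to_csv("cleanup_audit_trail.csv", index=False)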

Manual review of clean-up results by data experts will increase the assurance that the data analytics conclusions are credible.

If your AI project team appears to be daunted by the amount of data clean-up work, you can help the team prioritize the work and back off from self-imposed perfectionism.

Validating data

Once data has been cleaned up, it must be validated by testing for errors introduced or missed by the data preparation process up to this point. Validation is typically used to:

  1. Confirm the accuracy and completeness of the data preparation work.
  2. Identify more subtle, more complex, or more hidden problems in the data.

The benefit of validating the data is that previously unrecognized errors in the data and in the design of the data integration will become apparent and can then be corrected.

Validation is typically performed quickly and consistently using data visualization software and ETL software to highlight data integration problems for resolution by data experts.
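For example, a few validation checks like the following could be scripted and rerun after every data preparation cycle; the table names, expected volume, and key relationship are hypothetical placeholders.

    # Minimal post-preparation validation sketch using pandas (hypothetical tables).
    import pandas as pd

    customers = pd.read_csv("customers_clean.csv")  # cleaned outputs from earlier steps
    orders = pd.read_csv("orders_clean.csv")

    failures = []

    # Completeness: row counts should not have shrunk unexpectedly during preparation.
    if len(customers) < 0.95 * 100_000:  # hypothetical expected volume
        failures.append("customers row count is suspiciously low")

    # Accuracy: mandatory columns must no longer contain nulls after clean-up.
    if customers["customer_id"].isna().any():
        failures.append("customer_id still contains nulls")

    # Referential integrity: every order must join to a known customer.
    orphans = ~orders["customer_id"].isin(customers["customer_id"])
    if orphans.any():
        failures.append(f"{orphans.sum()} orders reference unknown customers")

    print("validation passed" if not failures else "\n".join(failures))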

If your AI project team insists that all the errors identified during validation must be addressed before the AI project can move forward, then help the team to prioritize the errors and back off from self-imposed over-validation.

Enriching data

Enriching data adds value to the data in the datastores. Data can be enriched in a number of ways including:

  1. Integrating tables of external data that supplement internal data.
  2. Performing code lookups and storing the retrieved values as a calculated column.
  3. Adding calculated columns and persisting them in the data.
  4. Creating new tables of aggregated data, sometimes known as data marts, where the keys to the summarization are predictable.
  5. Performing joins and persisting the results in new tables.

The benefit of this enrichment step is to greatly speed up subsequent data analytics processes. The cost to create the enriched data is typically a small fraction of the benefit to the many data analytics processes that are often repeated multiple times.

The following software is typically used to enrich data:

  1. ETL software is used to integrate the external data and create the aggregated tables.
  2. RPA bots can handle trickier cases where logic must be evaluated.
  3. Custom SQL software is typically developed for performing code lookups and joins.
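As an illustration of the lookup, join, and aggregation styles of enrichment, here is a minimal pandas sketch; the tables, keys, calculated column, and data mart are hypothetical examples rather than a prescribed design.

    # Minimal data enrichment sketch using pandas (hypothetical tables and keys).
    import pandas as pd

    orders = pd.read_csv("orders_clean.csv")    # cleaned internal data
    regions = pd.read_csv("region_lookup.csv")  # external reference data

    # Code lookup: store the retrieved region name as a new column via a join.
    orders = orders.merge(regions, on="region_code", how="left")

    # Calculated column, persisted in the data.
    orders["order_margin"] = orders["order_amount"] - orders["order_cost"]

    # Aggregated table (a small data mart) keyed on predictable summarization columns.
    monthly_sales = (orders
                     .assign(order_month=pd.to_datetime(orders["order_date"])
                             .dt.to_period("M"))
                     .groupby(["region_name", "order_month"], as_index=False)
                     ["order_amount"].sum())

    # Persist the enriched tables so repeated analytics processes can reuse them.
    orders.to_csv("orders_enriched.csv", index=False)
    monthly_sales.to_csv("monthly_sales_mart.csv", index=False)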

If your AI project team has added too much scope to enriching data, then help the team to prioritize the ideas and leave some of the ideas for another day.

Now your AI project is ready for the data analytics step that produces the business value.

What strategies would you recommend to reduce the tedium, effort, and cost of data wrangling to clean datastore contents? Let us know in the comments below.

Yogi Schulz, http://www.corvelle.com
Yogi Schulz has over 40 years of Information Technology experience in various industries. Yogi works extensively in the petroleum industry to select and implement financial, production revenue accounting, land & contracts, and geotechnical systems. He manages projects that arise from changes in business requirements, from the need to leverage technology opportunities and from mergers. His specialties include IT strategy, web strategy, and systems project management.
