Getting a Grasp on Data Science: David Donoho’s Six-Part Definition

The discipline of data science is notoriously difficult to define, and yet it is perhaps not impossible. I am currently working to gear more of my thought and instruction around Stanford statistician David Donoho’s definition of Greater Data Science (GDS). Here I’ll provide a brief summary of three key contributions of Donoho’s argument, ending with his six-part definition of the discipline.

First, Donoho appropriately connects the current expansive discipline of Data Science to its roots in 50+ years of work by statisticians, beginning with John Tukey in the 1960s. On Donoho’s telling, contemporary Data Science should understand itself not in opposition to the discipline of statistics but as an outgrowth and extension of the long tradition of statistics. More than historical recognition, this extends to appreciating the contemporary relevance of both traditional statistical analysis and predictive modeling with machine learning. Just as traditional statisticians must acknowledge and embrace the importance and relevance of contemporary machine learning methods, “cutting edge” data scientists must appropriately recognize the continued role and relevance of traditional statistical analysis.

Second, Donoho helpfully identifies the Common Task Framework methodology that undergirds the successes of contemporary predictive modeling. This methodology includes (a) a publicly available training dataset, (b) a competitive multi-party approach to predictive modeling, and (c) a scoring referee or system for evaluating the competing models against a test dataset unavailable to the competitors. He cites the Netflix Challenge as a famous example of this approach.
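To make those three ingredients concrete, here is a minimal sketch in Python of how a Common Task Framework round might be scored. It is only an illustration: the dataset is synthetic, the “teams” are stand-in scikit-learn models, and the referee is a single scoring function.

import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

# (a) A shared training dataset, plus a test set withheld by the referee
X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# (b) Competing models submitted by different (hypothetical) teams
competitors = {
    'Team A (linear model)': LinearRegression(),
    'Team B (random forest)': RandomForestRegressor(random_state=0),
}

# (c) The referee scores every submission against the hidden test set
for name, model in competitors.items():
    model.fit(X_train, y_train)
    mse = mean_squared_error(y_test, model.predict(X_test))
    print(f'{name}: test MSE = {mse:.1f}')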

Third, and most importantly, Donoho builds on the work of John Chambers and Bill Cleveland to outline a definition of Greater Data Science (GDS), which includes the following six sub-fields:

  1. Data Exploration and Preparation
  2. Data Representation and Transformation
  3. Computing with Data
  4. Data Modeling
  5. Data Visualization and Presentation
  6. Science about Data Science

This definition is so apt that it seems like common sense to those who practice in the field. But the complexity of the work involved in data science has meant that reaching the clarity offered by this definition has not been easy. Not content simply to define, Donoho devotes the remainder of his piece to discussing and illustrating some of the key practices included under each sub-field. I will number each sub-field as he does, using GDS for Greater Data Science.

GDS1: Data Exploration and Preparation. Frequently requiring upwards of 80% of the work involved in data science, this sub-field is too often neglected in the teaching of data science and merits greater attention in the future. It includes the many steps of curating data, dealing with anomalies, and pulling the data into the shape needed for analysis.
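As a small, hedged illustration of what this preparation work can look like in pandas (the file and column names here are hypothetical):

import pandas as pd

# Hypothetical raw file; column names are illustrative only
df = pd.read_csv('movies_raw.csv')

# Curate: keep the needed columns, fix types, handle anomalies
df = df[['title', 'date', 'budget', 'revenue']].copy()
df['date'] = pd.to_datetime(df['date'], errors='coerce')  # bad dates become NaT
df = df.dropna(subset=['title', 'date'])                  # drop unusable rows
df = df[df['budget'] > 0]                                 # remove placeholder zeros

# Pull the data into the shape needed for analysis
df['profit'] = df['revenue'] - df['budget']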

GDS2: Data Representation and Transformation. This includes the problem of data storage and requires that a data scientist be fluent in current database technologies. As of 2021 that includes SQL and NoSQL databases, distributed (cloud) systems, etc.
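For example, fluency here often means moving data between a database and the analysis environment. A minimal sketch with SQLite (the database file, table, and column names are hypothetical):

import sqlite3
import pandas as pd

# Hypothetical database and table
conn = sqlite3.connect('movies.db')

# Pull a representation suited to analysis: one row per title
query = 'SELECT title, date, budget, revenue FROM films WHERE budget > 0'
df = pd.read_sql_query(query, conn)
conn.close()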

GDS3: Computing with Data. This includes necessary knowledge of languages like Python or R and related current software used in preparation, analysis, and modeling, as well as an understanding of the workflows used in the development of an analytical process.
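As a rough sketch of what “computing with data” can look like in practice, here is one way to organize an analysis as small, reusable steps rather than one-off commands (the functions, file, and columns are hypothetical):

import pandas as pd

def load(path):
    # Read the raw data
    return pd.read_csv(path)

def clean(df):
    # The kind of preparation described under GDS1
    return (
        df[['title', 'date', 'budget', 'revenue']]
        .assign(date=lambda d: pd.to_datetime(d['date'], errors='coerce'))
        .dropna(subset=['date'])
        .query('budget > 0')
        .copy()
    )

def summarize(df):
    # One analysis step: yearly totals
    return df.groupby(df['date'].dt.year)[['budget', 'revenue']].sum()

# A simple, repeatable workflow
result = summarize(clean(load('movies_raw.csv')))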

GDS4: Data Modeling. This sub-field should rightfully include both traditional statistical approaches and contemporary predictive modeling with machine learning.

GDS5: Data Visualization and Presentation. This sub-field addresses the importance of visual analysis methods, from the standard plots used in Exploratory Data Analysis (EDA), to advanced charts that crystallize understanding of specific important features, to interactive data dashboards.
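As a brief, hedged example at the EDA end of that spectrum (reusing the hypothetical movie dataframe from above), a standard exploratory scatter plot with matplotlib might look like this:

import matplotlib.pyplot as plt

# A standard EDA plot: does budget relate to revenue?
fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(df['budget'], df['revenue'], alpha=0.4)
ax.set_xlabel('Budget')
ax.set_ylabel('Revenue')
ax.set_title('Revenue vs. budget (exploratory view)')
plt.show()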

GDS6: Science about Data Science. Key to making data science a true science, this sub-field investigates the real-world work of data scientists “in the wild”: documenting, describing, analyzing, and evaluating those practices, with the express aim of discerning which ones show the most merit for leading the discipline of data science forward to greater promise and productivity.

Given the sheer complexity of data science and the astounding speed of its ongoing development, it is difficult to overstate the value of Donoho’s six-part definition of the discipline. For myself, I will be contemplating and engaging the implications of this definition for months and years to come.

Allow me simply to recommend Donoho’s article as a read of incredible value, and recommend his helpful discussions of these six sub-fields as a beginning point for others as we work together to take the discipline forward.


SettingWithCopyWarning? Try using .copy()

What’s the deal with the SettingWithCopyWarning?

You may have noticed this popping up on occasion, usually with a pink background:

/opt/conda/lib/python3.7/site-packages/ipykernel_launcher.py:6: SettingWithCopyWarning: 
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy

This warning can be a strange one, since it can crop up unexpectedly and sometimes seems (or is) nearly random when it does. Indeed, it is especially confounding when it happens even when we are using the .loc accessor.

Now, the good thing about it is that it is only a warning. The operation you desired to perform most likely worked just fine. But you may be tired of the pink-hued warning cropping up all the time. And in certain circumstances there’s a chance that things may not work as they should.

Here’s a way to grasp the problem and fix it.

What’s Happening

The culprit is typically in a prior step. In my recent experience, steps like the following have sometimes resulted in the warning appearing a few steps later:

df = df[['title','date','budget','revenue']]

Or:

df = df[df['budget'] > 0]

It seems rather simple: I want to update the dataframe itself so that it has fewer columns or only a filtered set of records. And I’m overwriting the original dataframe with the new one, assigning the result back to df.

And then, at a later step, I sometimes start getting the dreaded SettingWithCopyWarning.
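For example, a later assignment like this one (adding a hypothetical new column) is often the step that finally triggers the warning, even though the real cause was the earlier slicing:

# Looks innocent, but because df may still be a view of the original
# dataframe, pandas raises SettingWithCopyWarning here.
df['profit'] = df['revenue'] - df['budget']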

Why is this Happening?

Under certain circumstances, when we subset a dataframe and save the result over the original variable, pandas may store the result as a *view* of the original dataframe. In memory, it retains a connection to the dataframe as it was before. Thus this view is, in the words of the warning, “a copy of a slice” of the original dataframe, and pandas cannot always tell whether a later assignment should affect the original as well.

What to Do About It

Here’s a quick and effective way to deal with it. When you store a new version of the dataframe to a variable, chain the .copy() method on the end of the operation. This severs the connection to the original dataframe and makes it an entirely new object.

For example:

df = df[['title','date','budget','revenue']].copy()

Or:

df = df[df['budget'] > 0].copy()

When we use .copy(), df becomes an entirely new, independent dataframe with no connection to the prior version; once nothing else references the old data, pandas can release it from memory.

Operations you perform after that point should no longer provoke the dreaded SettingWithCopyWarning.

Try it for yourself. It should help!


Statistics for Data Science

Statistical knowledge is immensely valuable to our work in data science. Indeed, the field of statistics has helped shape the realities we work in, including the software tools and algorithms we have available. Those with deep statistical knowledge play key roles in shaping the future of the field.

However, the individual practitioner in Data Science need not have a PhD in statistics or mathematics to be successful. Indeed, our everyday use of statistics proper is often strategic, empowered by software, and requires more of an intuitive grasp of key statistical concepts than deep knowledge.

As Vincent Granville wrote in 2014:

Data science barely uses statistical science and techniques.

He goes on to clarify:

The truth is actually more nuanced …

In the ensuing post he lists a series of new statistical concepts that are frequently useful in data science, followed by a series of old statistical concepts that are also often useful.

Then he follows up with this sentence:

From a typical 600-pages textbook on statistics, about 20 pages are relevant to data science, and these 20 pages can be compressed in 0.25 page.

Granville’s post is worth a read, as he goes into some reasons why old-school statistics proper is increasingly less useful in the world most of us live and work in, while machine learning techniques are becoming much more useful.

And I will add: Granville’s post, while framed somewhat controversially, fairly well summarizes the realities of data science work. There are several statistical concepts that are indeed useful when doing this work. But we have powerful software tools ready at hand to do much of that work — often using new techniques that yield better predictive results than older statistical approaches do.

Yes, we still often need to understand the meaning and implications of a range of statistical insights in relation to our data. But we can get lots of great work done with an intuitive understanding of those concepts. Thus, we can start with statistical fundamentals, use them as needed, and then expand our knowledge when the situation calls for it.

One last sentence from Granville summarizes this data-sciencey attitude toward stats:

I believe that you can explain the concept of random variable and distribution (at least what you need to understand to practice data science) in about 4 lines, rather than 150 pages. The idea is to explain it in plain English with a few examples.

Granville expressed an intention to draft a “statistics cheat sheet for data scientists,” and to do it in a single page. If he ever wrote that, I’ve not found it. It turns out he wrote a Machine Learning Cheat Sheet that covers many data-sciencey things, but not statistics.

Meanwhile, in 2017, O’Reilly published a nice handbook of 318 well-organized, succinct and readable pages to fill the gap: Practical Statistics for Data Scientists, by Peter Bruce and Andrew Bruce. I recommend it:

[Book cover: Practical Statistics for Data Scientists]

A work like this does a great job of bridging the gap for those of us coming to data science from a variety of fields.


Leveraging the narrative arc to inspire data-driven action

Great visual analytics involves a sequence of steps which may be understood as both a science and an art. That sequence includes:

  1. Understanding the business objectives. These drive and guide the analysis and provide it point and purpose.
  2. Spending the time required to understand the data inside and out. (Exploratory analysis.)
  3. Identifying and curating the most important insights, to prepare for explanatory analysis.
  4. Refining the design for clear, effective, and efficient communication, reducing clutter and highlighting key data points.
  5. Providing visual hierarchy, to draw attention to first things first, second things second, and so on.
  6. Structuring the report to provide the right mix of breadth and depth — breadth so that the stakeholders can see the big picture, and depth so that they can’t miss what’s most important. 
  7. When the occasion calls, leveraging elements of a good narrative, to lead the audience along a progression of steps from attention to recognition to engagement and finally to action.

Each of these steps merits its own extended discussion. In this post, I’d like to draw attention to the seventh step: leveraging elements of good narrative to lead the audience to action.

To my knowledge, no one has discussed this more effectively than Cole Nussbaumer Knaflic. Her book, Storytelling with Data, is the leading book on the topic, hitting many of the preceding steps, and then driving on to leverage the power of narrative in the presentation of the story.

Among the narrative elements she discusses are:

  • Establishing the setting, as a reminder of what we’re doing here and the shared goals we have.
  • Highlighting the problem as a tension between current obstacles and desired outcomes.
  • Viewing your audience as protagonists, whose actions will drive the story forward.
  • Taking a role in the story yourself, recommending possible courses of action, provoking your participants to engagement and leading toward resolution — i.e., data-driven action.

Her irrefutable point: You may have the best, most insightful, most beautifully designed analysis. But if you fail to effectively communicate that analysis, its sum total value is exactly ZERO. For at the end of the day, the sole point and purpose of analysis is to inform and generate action.

This is where Knaflic’s work is so valuable. If you’re short on time to read a book, Knaflic has presented this seventh point in the form of an entertaining and informative short video. Indeed, in this video she takes the discussion a step further to discuss the transformative power of the narrative arc:

  • Plot
  • Rising Action
  • Climax / Tension
  • Falling Action
  • Resolution / Ending

Only 15 minutes in length, her presentation is a work of art.

Properly received, her presentation should provoke you, as a data professional, to put the lesson into practice. May your future presentations be more focused, more meaningful, and much more effective at inspiring data-driven action.

Getting Started with Tableau Desktop

If you’re fairly new to Tableau, chances are you’ll find Tableau’s repository of free training videos (free with registration) to be very helpful. Indeed, there’s enough there to help you go from zero to serious just about as fast as you dare to do it. The tutorials are really pretty great.

BUT their organization scheme needs a little help.

Here’s a helpful list of the best, most useful videos to get started with. Once you’ve worked through those, check out my page with a more extensive organized index including Tableau’s more in-depth training videos.

Tableau Fundamentals

The following seven videos will get you up and running quickly.

From the Getting Started section

  1. Getting Started (25 min)
    A little lengthy, but does a great job of giving an overview of what’s possible in Tableau.
  2. The Tableau Interface (4 min)

From the Connecting to Data section

  1. Getting Started with Data (6 min) 
  2. Managing Extracts (4 min)

From the Visual Analytics section

  1. Getting Started with Visual Analytics (6 min)

From the Why is Tableau Doing That? section

  1. Understanding Pill Types (5 min) 
  2. Measure Names and Measure Values (5 min)

From this point forward, the best path depends on your needs. 

Check out my page with a more extensive organized index including Tableau’s more in-depth training videos.

On the Differences between Statistics and Machine Learning

In his post, The Actual Difference Between Statistics and Machine Learning, Matthew Stewart helpfully explains how statistical analysis differs from machine learning. Data science is still larger than machine learning. But it’s very appropriate to say something similar about the relationship between the two as we said above: Data science can’t be done without machine learning.

Both statistics and machine learning are part and parcel of the data science toolkit. And each plays a somewhat different role. Explaining the difference is helpful.

Stewart summarizes the difference like this:

  • Statistical modeling aims first and foremost for understanding and explaining relationships between variables. Predictive power is a secondary consideration.
  • Machine learning aims first and foremost for effective prediction. Some machine learning algorithms are easy to interpret, and some are not.

Thus, if you are writing a scientific paper that needs to explain the relationships between variables, statistical modeling is probably the best route.

However, if the point of your work is to produce actionable results that translate into greater efficiency and effectiveness achieving the mission of your organization — machine learning is often the better route.
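To make the contrast concrete, here is a hedged sketch of the two workflows side by side, using synthetic data and illustrative variable names: a statsmodels fit read for its coefficients and p-values, versus a scikit-learn model judged purely on held-out predictive error.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic sensor-style data (illustrative only)
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(300, 3)), columns=['gas_conc', 'temp', 'humidity'])
y = 2.0 * X['gas_conc'] + 0.5 * X['temp'] + rng.normal(scale=0.5, size=300)

# Statistical modeling: interpret the relationships and their significance
ols = sm.OLS(y, sm.add_constant(X)).fit()
print(ols.summary())  # coefficients, p-values, confidence intervals

# Machine learning: judge the model by held-out predictive accuracy
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print('Test MAE:', mean_absolute_error(y_test, model.predict(X_test)))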

In Stewart’s own words:

Machine learning is all about results, it is likely working in a company where your worth is characterized solely by your performance. Whereas, statistical modeling is more about finding relationships between variables and the significance of those relationships, whilst also catering for prediction.

He goes further to develop a helpful analogy:

By day, I am an environmental scientist and I work primarily with sensor data. If I am trying to prove that a sensor is able to respond to a certain kind of stimuli (such as a concentration of a gas), then I would use a statistical model to determine whether the signal response is statistically significant. I would try to understand this relationship and test for its repeatability so that I can accurately characterize the sensor response and make inferences based on this data. Some things I might test are whether the response is, in fact, linear, whether the response can be attributed to the gas concentration and not random noise in the sensor, etc.

Statistical analysis is great in such a case. It’s the right tool for the job.

But what if the nature of the problem is slightly different, and the goals are different?

In contrast, I can also get an array of 20 different sensors, and I can use this to try and predict the response of my newly characterized sensor. This may seem a bit strange if you do not know much about sensors, but this is currently an important area of environmental science. A model with 20 different variables predicting the outcome of my sensor is clearly all about prediction, and I do not expect it to be particularly interpretable. This model would likely be something a bit more esoteric like a neural network due to non-linearities arising from chemical kinetics and the relationship between physical variables and gas concentrations. I would like the model to make sense, but as long as I can make accurate predictions I would be pretty happy.

That brings it home nicely. In the case of machine learning, our interest is in the results: How can we make the most accurate predictions? And moreover, do these predictions yield benefits for the mission of our organization?

Perhaps said otherwise, statistics is more about understanding — helping to answer the question, What’s really happening here? Machine learning is more about driving action — helping to answer the question, What can we anticipate next? — and by extension enabling efficient and effective responses.

Tom Khabaza’s Nine Laws of Data Mining

Those who work in data mining or predictive analytics are familiar with the CRISP-DM process. Metaphorically, if not literally, that process description is taped to our wall. Tom Khabaza’s Nine Laws of Data Mining should be taped up right next to it.

Khabaza has published those laws as a series of blog posts, here. For each law, he has provided a short name, followed by a one-sentence summary, supported by a few paragraphs of explanation.

The value of these laws is that they help prepare us for what to expect as we do the work — and then they remind us of what we should have expected if we occasionally forget!

As I am a fan of brevity, I’m creating this post as a list of the single-sentence summaries. Occasionally I’ll add a short clarifying note. Here they are:

Tom Khabaza’s Nine Laws of Data Mining

  1. Business objectives are the origin of every data mining solution.
  2. Business knowledge is central to every step of the data mining process.
  3. Data preparation is more than half of every data mining process.
  4. The right model for a given application can only be discovered by experiment (aka “There is No Free Lunch for the Data Miner,” or NFL-DM).
  5. There are always patterns (aka “Watkins’ Law”).
  6. Data mining amplifies perception in the business domain.
  7. Prediction increases information locally by generalization.
  8. The value of data mining results is not determined by the accuracy or stability of predictive models. (Rather, their value is found in more effective action and improved business strategy.)
  9. All patterns are subject to change. (Thus, data mining is not a once-and-done kind of undertaking.)

These laws, as Khabaza points out, are not telling us what we should do. Rather they are “simple truths,” describing brute facts that give shape to the landscape in which data mining is done. Their truth is empirical, discovered and verified by those who’ve been doing the work. So it’s best to keep these truths in mind and adapt our efforts accordingly, lest we pay the price for failing to acknowledge reality as it is.

If you’re intrigued, and want to read further, view Khabaza’s full post here. His exposition of these points is more than worth the time!