Exploring the ML Tooling Landscape (Part 3 of 3)

Read my latest post on my Medium blog: “Exploring the ML Tooling Landscape (Part 3 of 3)”, covering ML tooling and adoption in industry.

All the best,

Tom

Exploring the ML Tooling Landscape (Part 2 of 3)

My latest post is available to read at my Medium blog: “Exploring the ML Tooling Landscape (Part 2 of 3)”, covering ML tooling and adoption in industry.

All the best,

Tom

Exploring the ML Tooling Landscape (Part 1 of 3)

My latest post is available to read at my Medium blog: “Exploring the ML Tooling Landscape (Part 1 of 3)”, covering the current state of ML adoption in industry.

All the best,

Tom

Top 5 Mistakes Companies Make with Data Science

My latest post is available to read at my Medium blog: “Top 5 Mistakes Companies Make with Data Science”. What to avoid in your journey towards data-driven decision-making.

All the best,

Tom

Data Science Success for Start-ups

My latest post is available to read at my Medium blog: “Data Science Success for Start-ups”. It’s about how to successfully plan for data science projects.

All the best,

Tom

Normalisation Techniques

Introduction

Feature normalisation is a key step in preprocessing data, either before or as part of the training process. In this blog post, I want to discuss the motivation behind normalisation and summarise some of the key techniques used. Most of the material in this post is based on my reading of Google’s excellent “Data Preparation and Feature Engineering in ML” course.

This particular blog post fits into the wider topic of feature engineering or data transformation as a whole, which also relates to other techniques such as bucketing and embedding. For the purposes of this blog post I will be conflating techniques that are sometimes treated separately as standardisation and normalisation proper. The typical distinction between the two is detailed here.

The techniques to be considered are,

  • Scaling (to a range)
  • Feature clipping
  • Log scaling
  • Z-score

In all cases, it is important to visualise your data and explore the summary statistics to ensure the transformation applied is appropriate for the dataset considered.

Why Transform your Data?

In this blog post, we will only consider transformation of numerical features. In this case, the motivation behind normalisation is to ensure the values of a given feature are on a comparable scale. This improves data quality and, relatedly, model performance and training speed. Normalisation may be required even with gradient optimisers that can handle unnormalised data across different features, because they cannot necessarily handle a wide range of values within a single feature. The data transformation itself can happen either before training or within the model itself. The main tradeoff is that the former is performed in batch whereas the latter is performed per iteration. Deciding between these two approaches will also depend on whether the model lives online or offline.

Scaling

This simply maps a feature’s given numeric range to a standard range, typically between zero and one inclusive. This is achieved with min-max scaling,

\[x'=\frac{x-x_{min}}{x_{max}-x_{min}}.\]

This transformation is particularly appropriate if the upper and lower bounds of the data are known, with few or no outliers, and the data is uniformly distributed.
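
As a minimal sketch on a toy pandas Series (the values are illustrative only),

import pandas as pd

values = pd.Series([3.0, 10.0, 25.0, 60.0])

# Min-max scale to the range [0, 1]
scaled = (values - values.min()) / (values.max() - values.min())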

Feature Clipping

Set all feature values above or below a certain threshold to a chosen fixed value. This threshold (or thresholds) is somewhat arbitrary; in some cases it is taken to be a multiple of the standard deviation. Feature clipping may be applied before or after other normalisations, which is useful for transformations that assume there to be few outliers.
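
For example, a sketch of clipping at a multiple of the standard deviation (the threshold choice here is illustrative),

import pandas as pd

values = pd.Series([1.0, 2.0, 3.0, 250.0])

# Clip values above mean + 3 standard deviations down to that threshold
upper = values.mean() + 3 * values.std()
clipped = values.clip(upper=upper)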

Log Scaling

This is appropriate when the datapoints follow a power-law distribution. The transformation aids in applying linear modelling to the dataset. The base of the log is, generally speaking, not that important to the overall transformation.
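
A quick sketch (log1p is used here so that zero values are handled gracefully),

import numpy as np
import pandas as pd

values = pd.Series([1, 10, 100, 10000])

# Log scaling compresses the long tail of a power-law-like distribution
log_scaled = np.log1p(values)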

Z-Score

This transforms a feature value into the number of standard deviations it lies away from the mean. The transformed distribution will have a mean of zero and standard deviation of one. It is desirable, but not necessary, that the feature values contain few outliers. The equation is as follows,

\[x'=\frac{x-\mu}{\sigma}.\]
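
And a corresponding sketch (note that pandas uses the sample standard deviation by default),

import pandas as pd

values = pd.Series([3.0, 10.0, 25.0, 60.0])

# Z-score: subtract the mean and divide by the standard deviation
z_scores = (values - values.mean()) / values.std()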

All the best,

Tom

Hierarchical/Multi-level Indexing in Pandas

Introduction

A key stage in any data analysis procedure is to split the initial dataset into more meaningful groups, which can be achieved in Pandas using the DataFrame groupby() method. It can be more useful still to manipulate the returned DataFrame into more meaningful groups using hierarchical (interchangeably, multi-level) indices. In this blog post, we will go through Pandas DataFrame indices in general, how we can customise them, and typical instances when we might use hierarchical indices. The data used in the code examples in this article comes from UK population estimates provided by the ONS.

DataFrame Indices

First things first, let’s get the initial dataset we’ll use for this article.

# Get initial DataFrame (the jsonstat library is used to read the ONS data)
import jsonstat

url = 'https://www.nomisweb.co.uk/api/v01/dataset/NM_31_1.jsonstat.json'
data = jsonstat.from_url(url)
df = data.to_data_frame('geography')
df.reset_index(inplace=True)

The above will return a DataFrame containing a chronological record of population counts by UK region and year, as well as some basic demographic information such as sex and age group.

In general, whenever a new Pandas DataFrame is created, using for example the DataFrame constructor or reading from a file, a numerical index of integers is also created starting at zero and increasing in increments of one. By default, this is a RangeIndex object, as you can confirm by looking up the DataFrame’s index attribute,

df.index
#=> RangeIndex(start=0, stop=22800, step=1)

Pandas also supports setting custom indices from any number of the existing columns using the set_index() method. By specifying the keyword argument drop=False we make sure to retain the column after setting our custom index. Even after specifying a custom index on a DataFrame, we still retain the underlying integer positions, so we can use either as a means of filtering or selecting from the DataFrame: DataFrame.iloc is used for integer-based indexing, whereas DataFrame.loc is used for label-based indexing. Ideally an index should be a unique and meaningful identifier for each row, which is precisely why we may choose to use multi-level indexing in the first place. Given a custom index, we can revert to the standard numerical index with DataFrame.reset_index(). For our dataset this doesn’t make too much sense, but imagine having a collection of measurements for a set of unique datetimes: we could choose the datetime as our index.
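
As a minimal sketch (using the 'geography' column from the DataFrame above; the selections are purely illustrative),

# Set a custom index from an existing column, keeping the column itself
df_indexed = df.set_index('geography', drop=False)

# Label-based selection uses the custom index ...
df_indexed.loc['Scotland'].head()

# ... while integer-based selection still works by position
df_indexed.iloc[0:5]

# Revert to the default integer index
df_indexed = df_indexed.reset_index(drop=True)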

Groupby & Hierarchical Indices

A more typical scenario where we would come across hierarchical indices is in the case of using the DataFrame.groupby function. Given our DataFrame above, imagine we wanted to find the breakdown of the latest population statistics by region and sex,

# This line is to find the population count by sex for the most recently recorded year i.e. df.date.max()
df = df[(df.date == df.date.max()) & (df.age == 'All ages') & (df.measures == 'Value')][['geography', 'sex','Value']]

# This is just to standardise the column names
df = df.rename(columns={'geography': 'region', 'Value': 'value'})

Calling groupby() on this DataFrame will allow us to group by the desired categories for our analysis, which in this case will be the region and sex. We want to find the sum total populations conditioning on region and sex,

df_grouped  = df.groupby(['region', 'sex']).sum()

# | region            | sex    | value      |
# | ----------------- | ------ | ---------- |
# | England and Wales | Female | 29900600.0 |
# |                   | Male   | 29215300.0 |
# |                   | Total  | 59115800.0 |
# | Northern Ireland  | Female | 955400.0   |
# |                   | Male   | 926200.0   |
# |                   | Total  | 1881600.0  |
# | Scotland          | Female | 2789300.0  |
# |                   | Male   | 2648800.0  |
# |                   | Total  | 5438100.0  |
# | Wales             | Female | 1591300.0  |
# |                   | Male   | 1547300.0  |
# |                   | Total  | 3138600.0  |

The index of the DataFrame df_grouped will be a hierarchical index, with each “region” index containing multiple “sex” indices. We can confirm this by looking up the index attribute of df_grouped,

df_grouped.index

# => MultiIndex(levels=[['England and Wales', 'Northern Ireland', 'Scotland', 'Wales'], ['Female', 'Male', 'Total']], codes=[[0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3], [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]],    names=['region', 'sex'])

We may instead want to swap the levels of the hierarchical index so that each sex index contains multiple region indices. To do this, call swaplevel() on the DataFrame.

df_grouped.swaplevel().index

# => MultiIndex(levels=[['Female', 'Male', 'Total'], ['England and Wales', 'Northern Ireland', 'Scotland', 'Wales']],
           codes=[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2], [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]],
           names=['sex', 'region'])

For presentational purposes, it is useful to pivot one of the hierarchical indices. The Pandas unstack() method pivots a level of the hierarchical index to produce a DataFrame with a new level of column labels corresponding to the pivoted index labels. By default, unstack() pivots on the innermost index.

df_grouped.unstack('sex')

#                     | value                                |
# | sex               | Female     | Male       | Total      |
# | region            | ---------- | ---------- | ---------- |
# | England and Wales | 29900600.0 | 29215300.0 | 59115800.0 |
# | Northern Ireland  | 955400.0   | 926200.0   | 1881600.0  |
# | Scotland          | 2789300.0  | 2648800.0  | 5438100.0  |
# | Wales             | 1591300.0  | 1547300.0  | 3138600.0  |

The resulting DataFrame of this “unstacking” will no longer have a hierarchical index,

df_grouped.unstack('sex').index

#=> Index(['England and Wales', 'Northern Ireland', 'Scotland', 'Wales'], dtype='object', name='region')

All the best,

Tom

Investigating Attendee Reviews

My latest post is available to read on Skills Matter’s Medium blog: “Investigating Attendee Reviews”. It’s a look at Skills Matter’s reviews app.

All the best,

Tom

Shining a Light on Black Box Models with LIME

Machine learning models are increasingly the primary means by which we both interact with our data and draw conclusions. However, these models are typically highly complex and difficult to debug. This characteristic leads to machine learning models frequently being referred to as “black boxes”, as there is often very little transparency - at least upfront - about how the input data links to the output the model produces. This is much more of a problem for more sophisticated models, as it is generally accepted that more sophisticated models are correspondingly more intractable.

The LIME project aims to address this issue. I have only recently been introduced to LIME by the course Data Science For Business by Matt Dancho, but it has definitely piqued my interest and opened up an entire area of research that I was previously unaware of. In this post, I want to go through the main motivating factors of the project and review the theory of how it works. Although the project has evolved somewhat since its inception, as I will not be going into code at this stage, the general discussion of the post should still be valid. This is a very interesting topic to me and I will want to return to it in the future, so this post should provide a strong foundation for future blog posts I will write on this topic. The best place to find out more is one of the original papers, “Why Should I Trust You?”: Explaining the Predictions of Any Classifier, which I’m sure we can agree is one of the best-named academic papers out there.

Why Clarity is Important

The basic way of interacting with a model is that we either directly or indirectly provide some input data and obtain some output. Especially in the case of the non-specialist, they may have no real understanding of either the link between the input data given to a model and the output it provides, or how one model compares to another. As a data scientist, you may be able to draw some comparison between models during the test phase of model development, for example by using an accuracy metric. However, such metrics have their own problems: in general they do not capture the actual metrics of interest we want to optimise for, i.e. actual business metrics, and they do not, on their own at least, indicate why one model’s output may be less suitable than another’s; for instance, a better performing model may be more complicated to debug.

Taken together, the above points relate to issues of the trust placed upon the model. These trust issues relate to two key areas: (1) Can I trust the individual predictions made by the model? (2) Can I trust the behaviour of the model in production? To take this a bit further still, in a world increasingly aware of the practical implications of machine learning models, and especially so now that GDPR has come into effect, we can no longer deny the ethical questions surrounding machine learning. It behoves us to understand how the model operates internally. A key step to resolving these issues is to grant a more intuitive understanding of the model’s behaviour.

As detailed in the original paper, experiments show how human subjects can successfully use the LIME library to choose better performing models and even go on to improve their performance. The key principle to this is how LIME generates local “explanations” of the model, which characterise sub-regions of the model’s original parameter space.

Unpacking “LIME”

The name LIME stands for Local, Interpretable, Model-Agnostic Explanations, and can really be understood as the mission statement of the package: the LIME algorithm wants to produce explanations of the predictions of any model (hence model-agnostic). By explanation, we mean something that provides a qualitative understanding of the link between an instance’s components and the model’s prediction. A common way to do this in LIME is to use a bar chart to indicate the individual model components and the degree to which they support or contradict the predicted class.

Localness has another quality associated with it: the local explanations must have fidelity to the predictions obtained from the original model. That is, the explanation should match the prediction of the original model in that local region as closely as it can. Unfortunately, this brings it into conflict with the need that these explanations be interpretable, that is, representations that are understandable to humans regardless of the underlying features. For instance, imagine a text classification task with a binary vector as output, regardless of the number of input features. By feeding more features into the model of the local explanation, we could anticipate greater accuracy with respect to the original model. However, this would add greater complexity to the final explanation as there will be that many more features contributing to the explanation.

In other words, LIME aims to programmatically find local and understandable approximations to a model, which are as faithful to the original model as possible. Such simplified models around single observations are generally referred to as “surrogate” models in the literature: surrogate, meaning simple models created to explain a more complex model.

The gold-standard for explainable models is something which is linear and monotonic. In turn these mean,

  • Linear: a model where the expected output is just a weighted sum of the inputs, possibly with a constant additive term,
\[f(x) = \sum_{i=1}^{n} w_i x_i + c.\]
  • Monotonic: for instance in the case of a monotonically increasing function, the output only increases with increasing input values,
\[f(x_j) > f(x_i) \iff x_j > x_i.\]

This is precisely what LIME tries to do by finding linear models to model local behaviour of the target model.

In order to find these local explanations, LIME proceeds using the following algorithm,

  1. Given an observation, permute it to create replicated feature data with slight value modifications. (This replica set is a set of instances sampled following a uniform distribution)
  2. Compute similarity distance measure between original observation and permuted observations.
  3. Apply selected machine learning model to predict outcomes of permuted data.
  4. Select m features to best describe the predicted outcomes.
  5. Fit a simple model to the permuted data, explaining the complex model outcome with the m features from the permuted data, weighted by its similarity to the original observation.
  6. Use the resulting feature weights to explain local behaviour.

(The above steps are taken from the article “Visualizing ML Models with LIME” by “UC Business Analytics R Programming Guide”.)
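
To make the above steps concrete, here is a minimal sketch using the Python lime package for tabular data; the classifier model, the arrays X_train and X_test, and the feature and class names are placeholders assumed for illustration,

from lime.lime_tabular import LimeTabularExplainer

# Build an explainer from the training data used to fit the black box model
explainer = LimeTabularExplainer(X_train,
                                 feature_names=feature_names,
                                 class_names=class_names,
                                 mode='classification')

# Explain a single observation using the m (here 5) most important features
explanation = explainer.explain_instance(X_test[0],
                                         model.predict_proba,
                                         num_features=5)

# Feature/weight pairs making up the local, interpretable explanation
explanation.as_list()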

The main benefit of this approach is its robustness: local explanations will still be locally faithful even for globally nonlinear models. The output of this algorithm is what is referred to as an “explanation” matrix, which has rows equal to the number of instances sampled, and columns for each feature. At this stage, the matrix produced is sufficient to provide local explanations in whatever form is deemed appropriate. In the next step, this matrix is used to characterise the global behaviour of a model.

Going Global

What about global understanding of the model? To achieve this, LIME picks a set of non-redundant instances derived from the instances sampled in the previous step, following an algorithm termed “submodular pick” or SP-LIME for short.

Once we have the explanation matrix from the previous step above, we need to derive the feature importance vector. The components of this vector give the global importance of each of the features in the explanation matrix. The precise function mapping the explanation matrix to the importance vector depends on the model under investigation, but in all cases it should return higher values (meaning greater importance) for features that explain more instances, i.e. features found across more instances globally.

Finally, SP-LIME only wants to find the minimal set of instances, such that there is no redundancy in the final returned set of instances. Non-redundant, meaning that the set of local explanations found should cover the maximum number of model features with little, if any, overlap amongst the features each individual local explanation relates to. The minimal set is chosen by a greedy approach that must satisfy a constraint that relates to the number of instances a human subject is willing to review.
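
As a rough sketch of this greedy selection (not the library’s implementation; the square-root importance follows the text example given in the original paper),

import numpy as np

def submodular_pick(W, budget):
    # W is the explanation matrix (instances x features); budget is the
    # number of instances a human subject is willing to review.
    importance = np.sqrt(np.abs(W).sum(axis=0))  # global feature importance

    chosen, covered = [], np.zeros(W.shape[1], dtype=bool)
    for _ in range(budget):
        # Marginal coverage gain from adding each not-yet-chosen instance
        gains = [importance[~covered & (np.abs(W[i]) > 0)].sum()
                 if i not in chosen else -1.0
                 for i in range(W.shape[0])]
        best = int(np.argmax(gains))
        chosen.append(best)
        covered |= np.abs(W[best]) > 0
    return chosen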

In short, the approach taken by LIME is to provide a sufficient number of local explanations to explain the distinct behavioural regions of the model. This sacrifices the global fidelity of explanations in favour of local fidelity. As discussed in the original paper, this leads to better results in testing with both simulated and human users. In general though, while global interpretability can be approximate or based on average values, local interpretability can be more accurate than global explanations.

Closing Remarks

I want to return to LIME and the wider topic of machine learning interpretability in future blog posts, including how this works with H2O as well as being able to provide a more in-depth technical run-through of the library.

All the best,

Tom

The Why of Data-Driven Organisations

Something a bit different! My latest blog post is the transcript of the lightning talk I will deliver at Infiniteconf 2018 in July. This is available to read now on Medium. See you there!

All the best,

Tom