Tuesday, 15 July 2014

The quiet BI revolution (part one)

Three years ago on Wallpapering Fog, I wrote a post about why our company (or more precisely, since the company's huge, my department) had adopted Tableau software.

At the time, I said:

"I feel like I'm giving away a trade secret here, but what the hell, you're going to hear about it from somewhere soon anyway."

Having just attended the London Tableau Conference, I can confirm that the secret is well and truly out. It was a brilliant event, brimming with enthusiastic people and great ideas, that deserves its own write-up away from this post.

For this post, I'd like to indulge in one of my occasional crystal ball gazes and look at the future of Business Intelligence (BI). Not BI on the cutting edge - although that is an exciting topic - but BI in regular businesses. Businesses that have small analytics teams, no time and aren't PR'ing a project to the trade press, with all of the doubts and the dirty laundry Tippexed out.

So where is BI - and in particular, regular reporting - for a normal analytics team going to head over the next five to ten years?


1. Data Visualisation and Reporting

Data vis, as it applies to most businesses, is now a solved problem (what to visualise isn't - that's part two of this post). You can have good-looking reports, automatically refreshed and delivered onto any device you like, and even on paper if you must. They're quick to build, easy to adapt and easy to maintain - more so than Excel-based reports ever were, and much more flexible.



The only things you can't do easily are the weird and wonderful, innovative visuals that nobody's ever seen before - and you can't have all of this functionality for free.

On the first of these problems, I'd argue that it isn't really a business issue. Businesses need straightforward charts, tables and standard reports, not animated 3D network diagrams, so software like Tableau will do a great job. If you are looking for real flexibility though, Lyra is something that I'm quite excited about...

On the second problem - cost - you just have to bite the bullet. $20,000 spent on the right BI software will transform your analytics department.

(That's if you give the $20k to your analytics department. DO NOT give it to a centralised IT team. They'll very likely ask for another $230k to make a nice round number, disappear for six months and then reappear asking for more money.)

The real change in data reporting, investigation and visualisation over the next five years or so, is going to be from a situation where many businesses don't yet realise that it's a solved problem, to one where they do.

Tableau's solved this problem and, in my opinion, is by some distance the best of the new breed of reporting and investigation tools, but if it hadn't been Tableau it would have been QlikView. And if not them, Spotfire. And... you get the point.

What's going to happen over the next few years is that Tableau knowledge will become more valuable - because more businesses will want to hire those skills - and also less valuable, because loads more people are going to know how to use the software. The end result is basic supply and demand. It might swing back and forth for a bit, but we'll settle onto a situation where many (most?) analysts know Tableau as a regular part of their job. There'll be specialists, just like there are specialist Excel consultants, but most businesses will sort themselves out and nobody will be paid a fortune just for knowing how to use Tableau.


So far, no real surprises and if you read Wallpapering Fog regularly then you've probably heard those ideas before. The next two points are where I see a quiet revolution happening.


2. (not) Data Warehousing

You probably already know how this works. Analysts with Tableau do the visuals, but there's a big SQL database in the back end, looked after by a centralised IT team, which contains exactly 73% of what you want to visualise. A big enough gap that you can't just ignore data that isn't in the data warehouse, but not so big that the data warehouse as it stands is useless.

What often happens in response to an incomplete data warehouse is that analysts build a hack. The data that isn't centralised is pulled in from ad-hoc spreadsheets and mashed together in Excel or Tableau, which works OK until you need more than a couple of people to update those spreadsheets, or somebody goes on holiday. This is the issue we often hit in media agencies: you can solve a problem once, but you can't roll the solution out everywhere, to all clients, because some parts of your 'solution' are held together with gaffer tape and bits of string.

What's needed is some software that's built for analysts and allows them to merge different data sources and to schedule updates, without recourse to a database administrator.

If you were at the Tableau Conference last week, then you'll have seen Alteryx sat squarely in this area. It's drag-and-drop, hugely flexible and very friendly; I played with the demo a few months ago and loved it.

But, it is quite pricey. Especially if, like us, you wouldn't plan on using all of Alteryx's capabilities and are only really interested in blending data sources together.

Did somebody say what about Open Source? Here's my tip of the day. Go and download the Community Edition of Pentaho Kettle and persevere through the thirty-minute skirmish it will take you to get it installed and working properly. Your reward will be drag-and-drop data acquisition, blending and output, all for free. This is how I process a lot of my football data and it's brilliant.
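If you'd rather script a blend than drag and drop one, the same job is only a few lines of R. Here's a rough sketch of the idea - the file names and columns are invented for illustration, not a recipe for your data:

# A minimal blend: a central warehouse extract plus an ad-hoc spreadsheet
# export, joined on a shared key and written out for Tableau to pick up.
warehouse <- read.csv("warehouse_extract.csv")   # e.g. sales by store and week
manual    <- read.csv("promo_calendar.csv")      # the spreadsheet nobody centralised

blended <- merge(warehouse, manual,
                 by = c("store_id", "week"),
                 all.x = TRUE)                   # keep every warehouse row

blended$on_promo <- !is.na(blended$promo_name)

write.csv(blended, "blended_for_tableau.csv", row.names = FALSE)

Schedule something like that and you've got a poor man's Kettle job; the point of tools like Kettle and Alteryx is that you get the same result without needing anybody who writes code.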



In terms of crystal ball gazing, the analytics department now starts to look quite different. It's running a lot of reports on schedules, freeing up time for investigation and innovation. Nobody does the whole "getting into work at 7am on Monday for a frantic three hours of board report running" routine - which retailers in particular are very fond of - any more. And thank God for that.

In our new world, IT only handles data when it needs to flow in large volumes from a point-of-sale or distribution system. IT does the bit that it already does very well now, but everybody stops moaning that the data warehouse doesn't also contain lots of the smaller user-maintained pieces of information that make a business run properly.

If you're thinking that the new world sounds like the same old BI promises, then you're right, it does. We should have been able to do these things ages ago but it didn't work due to the disconnect between analysts and IT and the slow build time, inflexibility and high cost of software. Analysts received questions and understood what output was needed, but usually only IT had the (inflexible) technology to make that output happen automatically.

The big differences now are speed, cost, flexibility and the number of companies that will be working in this new way. It's no exaggeration to say that you can go from raw data to first-version business reports in two days. You can pin those down to a format everybody's happy with in a couple of months (faster if you make decisions quickly) and then fully automate them. Reports are able to evolve because they can be rebuilt and republished very quickly - in hours rather than weeks.

Then what do you do next? It's a serious question with which some reporting teams are going to struggle. When nobody needs you to move data from Google Analytics to Excel and chart the same charts every week, what will you do? The time to start thinking about that is now.


3. Data acquisition

This one's not solved; it's currently being solved and we've got a little way to go yet. Data acquisition is the last barrier between analysts, managers and an automated dashboard containing absolutely everything on which they wish to report.

Alteryx and Pentaho Kettle are fantastic data assembly (ETL) tools, provided your data isn't stored somewhere really stupid. Unfortunately, I work in marketing and our industry specialises in making data as difficult as possible to access.

- It's in untidy, bespoke web interfaces, behind login screens.

- It's in the colour key that somebody has used to fill cells in Excel.

- It's emailed across, with a friendly "Hello! Hope you had a good weekend. Today's spend number is £2,486."


Database that, smartarse.


What I see happening over the next few years is some new tools and some new ways of working. Provided data is delivered in a consistent format, then the likes of Alteryx or Kettle can make the data acquisition and blending problem go away.

Where data is in web interfaces, we can already scrape it using Python or R, but then you need an analyst who knows how to scrape and that's not such a common skill-set. (Top tip: look for a football analyst - by necessity, we're getting quite good at it.)
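For a flavour of what that scraping looks like, here's a bare-bones sketch in R using the rvest package. The URL is a placeholder, and real interfaces usually add a login step and a lot more mess:

# Pull the first HTML table from a page and keep it as a data frame.
library(rvest)

page   <- read_html("http://example.com/some-report-page")   # placeholder URL
tables <- html_table(html_nodes(page, "table"))
report <- tables[[1]]

head(report)

Simple enough once you know how, but it's still a coding skill rather than a drag-and-drop one, which is exactly the problem.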

We're going to evolve towards XML and other data feeds in addition to the usual user-facing tables that come from the majority of web data sources, which again brings the likes of Alteryx into play. The data providers who don't do this should gradually become extinct through a process of natural selection.

Eventually, these changes will form an almost universal API. Every provider's data is different, but you'll be able to get to the data in an automated way and that's 90% of the battle. When you've done that, you only need to solve the data transformation problem once.

We'll also see - as is happening already - advanced data providers like DataSift starting to deliver information into services such as Google's Cloud Platform. A few years ago this wouldn't have helped, because you'd just be swapping one API for another, but when a critical mass of services all use that same cloud, easy connectors start to appear.

So why do I say that data acquisition isn't a solved problem yet?

Well, for one, too many sources are still silos, but a second issue is that user input is still much too difficult. There's no Tableau for manual data entry; we still have to call a developer to build web forms, database schemas and data validation, and to link it all together for us. Either that, or we have a central spreadsheet for people to fill in and we pray that they don't break it, or all try to edit it simultaneously.

I'm sure this software will come, but I haven't yet seen it. Microsoft Access forms and VBA really aren't it, and neither are Google Forms. Microsoft, for all that they had a massive head start and will claim to have solutions to all of these problems, are nowhere in the BI race and falling further behind.

If you've seen another solution to the problem of regularly taking validated user input without embarking on a software build or trying to lock down a spreadsheet, I'd love to hear about it in the comments.


The future's bright

In our future analytics department a lot has changed, but it's been a quiet revolution. A lot of things that were difficult are now easy and the business analyst's scope has extended well into traditional IT territory. Or, more accurately, that territory is now more clearly delineated between the two departments, and issues that neither IT nor analysts could previously solve (for a sensible budget, in a sensible time-frame) have been dealt with.

Reports have moved to web browser interfaces - except for those staff who absolutely insist that they need printed ones - and automation takes care of putting them together. Analysts can quickly and visually interrogate their data and as an aside, Excel has moved to being a secondary tool for serious analysts, behind Tableau (or a competitor of your choice).

We were promised all of this a long, long time ago. Most businesses might actually get there in the next five years or so. It's interesting that the process of assembling Business Intelligence is being solved backwards... Rather than working forwards from data collection, to merging, to visualisation, solving the visualisation element has driven a requirement to blend data better, which in turn is driving changes in how we acquire it.

And you know what happens after that? Businesses will start to realise that a lot of the information they've spent years expensively trying to assemble won't, on its own, work the miracles that they hoped it would. Not without some other major changes happening too.

My favourite quote from last week's conference came from Fawad Qureshi of Teradata.

"Old business process + expensive new technology = expensive old business process"

That will be part two of this post. When you've got to your ultimate suite of business reports and they're easy to maintain, what happens then? What changes? Does anything happen at all?

Thursday, 22 May 2014

The insular world of marketing




It's election day! And it's an election day that I'm personally fascinated by, in terms of whether the pre-election polls are anywhere near accurate.

Take a look at the image above. The Sun and YouGov are predicting a narrow UKIP win.

Do you know anybody who's said they're voting UKIP? I don't. Maybe you've got a batty aunt, or a slightly racist grandparent who makes you cringe now and again in public, but do over a quarter of people you know intend to vote UKIP?

Probably not.

This effect caused me to lose a tenner, betting on the London Mayoral election that saw Boris Johnson beat Ken Livingstone. The bookies had Boris as the nailed-on favourite, but I only knew one person who planned to vote for him. Nobody I knew could name many people who planned to vote for Boris either.

Of course you often surround yourself with like-minded friends, but work colleagues and acquaintances were vehemently anti-Boris too - and surely your work colleagues are a decent, random(ish) sample of different opinions?

It turns out not and I lost my tenner.

If you're here, reading this, then you're likely a thoughtful, analytically minded person with either a marketing or football analysis interest. Probably, you're not planning to vote UKIP and you don't know many - or even any - people who are.

Does this matter? In marketing, I think it does. We're trying to sell products to the population of the UK in general and to do that, we need to understand what motivates people in general, not just people like ourselves.

Walk into any big marketing agency in London and the people you'll meet will predominantly be:

  • Under 35. Many will be under 25.
  • University educated.
  • White.
  • Renting their home.
  • Unmarried.
  • No kids.
  • Travelling daily on public transport. Mainly on the tube, which obviously only exists in London.

That's a very narrow selection. Even the simple fact that all of these people live in London makes their day-to-day life quite unlike that of 85% of the UK population.

I work for MediaCom North - based in Leeds - and so some of the regional biases are removed in our office, but I bet I still couldn't find a UKIP voter here. I'd be staggered if over a quarter of the voters in the office supported UKIP.

As marketing people, we need to be acutely aware of our own inherent biases so that we can avoid them. Look at the adverts running on TV on any night of the week and ask yourself how many are designed to appeal to an under thirty year old audience. Then ask yourself, honestly, if most of the people buying that product are likely to be under thirty. Cars? Nope. Supermarket shoppers? Nope. Holidays? Nope.

For me, agencies need to be doing much more immersion in the lives of people who don't think like themselves (and I mean real immersion - I love stats as much as the next guy, but they're a starting point, not the whole solution). A once-a-year factory visit or focus group just doesn't cut it.

We should also be hiring and retaining a more diverse mix of people, particularly people over thirty five. If the problem is that those people leave London when they hit their mid-thirties then maybe we need some more innovative solutions to tap into their opinions and experience.

Finally, as a client, I'd be looking seriously at non-London agencies to get some wider perspective. A global car manufacturer would naturally look to the scale of the big London agencies - and maybe they should - but they need to be aware that the people working on their account almost certainly don't own a car, don't have the money to buy one, and would have nowhere to park one if they did. That's why virtually all car ads are either full of young people, or a very crude caricature of older people.

Could your agency advertise UKIP and really understand what motivates all of those people who plan to vote for them? Or would you end up with a stereotyped portrait, produced by a youthful, liberal-leaning, well educated planner?

Of course, the question of whether you should take that brief is a whole other issue.

Monday, 19 May 2014

Bigger data isn't necessarily better

Sometimes it's hard being a statistician. Sometimes a long established statistical concept jars with your audience and no matter how hard you try to explain it in plain terms, you can see in the audience's eyes that they don't really believe you. Those suspicious eyes staring back at you are fairly sure you're pulling some shenanigans to get out of working harder, or to wring an answer from the data that isn't really there. What you're saying just feels wrong.

Explaining sampling can be like that, particularly when you're dealing with online data that comes in huge volumes and fighting against a tidal wave of 'Big Data' PR.

The audience's thinking goes...

More data is just better, because more of a good thing is always better.

More data must be more accurate, more robust.

More impressive.

Then a statistician says, "We only need 10% of your file to get you all the answers that you need".

And rather than sounding like an efficient, cost effective analysis, it feels disappointing.


"You only need a spoonful of soup to know what the whole bowl tastes like"


A common question from non-statisticians is to ask, "Overall, I have five million advert views [or search advert clicks, or people living in the North East of England, or whatever], so how big does my sample size need to be?"

Which sounds like a sensible question, but it's wrong.

Statisticians call that overall views number the "Universe" or "Population". It's the group from which you're going to draw your sample.

Once your population is bigger than about twenty thousand, it makes no difference at all to the size of the sample that you need. If you say that you've got one hundred million online advert views, and ask how big your sample needs to be, then the answer is exactly the same as if you had fifty million views. Or two hundred million.

Which probably sounds like statistical shenanigans again.

Think about it like this. I've got lots of ping-pong balls in a really big box and I tell you that some are red and some are white and they've all been thoroughly mixed. You can draw balls from the box one at a time until you're happy to tell me what proportion of each colour you think is in the box. How many ping-pong balls do you want to draw?

Seriously, pause and have a think: how many do you want to draw? It's a really big box and you'll be counting ping-pong balls for a week if you check them all.

Let's start with ten. You draw ten balls and get four red and six white.

Is the overall proportion in the box 60/40 in favour of white? It might be, but you're not really sure. Ten isn't very many to check.

You pull another ten and this time you get five more of each colour. Now you've got eleven white and nine red. Happy to tell me what's in the box yet? No?

Let's keep drawing, all the way up to 100 ping-pong balls.

Now you've got 47 whites and 53 reds. The proportion seems like it's close to 50/50, but is it exactly 50/50 in the rest of the box?

Every time you draw more ping-pong balls, you get a bit more sure of your result. But have you noticed that we haven't mentioned once how many balls are in the box in total; only that it was a big box? It's because it doesn't matter.

As long as the population is "big" and we draw balls at random, it doesn't matter how big it is.

Here's how your confidence in the result changes as you draw more ping-pong balls from the box:


The bigger your sample, the better your accuracy, but beyond a certain size - say 5,000 - your result is highly accurate and having an even bigger sample doesn't make very much difference.
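If you want to check the arithmetic behind that chart - and behind the claim that the size of the box doesn't matter - the standard margin-of-error formula for a proportion is all you need. A quick sketch in R, assuming a roughly 50/50 split and a 95% confidence level:

# Margin of error (95% confidence) for an estimated 50/50 proportion, with the
# finite population correction included so we can vary the population size N.
margin_of_error <- function(n, N, p = 0.5) {
  se  <- sqrt(p * (1 - p) / n)       # standard error of the sample proportion
  fpc <- sqrt((N - n) / (N - 1))     # finite population correction
  1.96 * se * fpc
}

# Bigger samples help, with rapidly diminishing returns...
round(100 * margin_of_error(c(10, 100, 1000, 5000, 20000), N = 1e8), 1)
# roughly +/- 31.0, 9.8, 3.1, 1.4, 0.7 percentage points

# ...but the size of the population barely matters.
round(100 * margin_of_error(1000, N = c(2e4, 1e6, 1e8)), 1)
# roughly +/- 3.0, 3.1, 3.1 percentage points

A sample of 1,000 ping-pong balls gives you the answer to within about three percentage points, whether the box holds twenty thousand balls or a hundred million.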

"But!", say the objectors, "Online, data is basically free and we can use the whole dataset, so we should!"

And that's true, up to a point. Data storage is so cheap it's close to free, but data processing isn't. A large part of the cost is in your own time - you can wait ten minutes for a results dashboard to refresh, or you can sample the data, wait thirty seconds and get the same answer. It's your choice, but personally I like faster.

Outside the digital world, storage is still cheap, but data collection can get really expensive.

The TV industry in the UK is constantly beaten with a stick because its audience figures are estimated using a sample of 'only' 5,100 homes. It costs a lot to put tracking boxes into homes and this number has been arrived at very carefully, by very well-trained statisticians. It's just enough to measure TV audiences with high accuracy, without wasting money.

In fairness, the BARB TV audience panel is challenged by a proliferation of tiny satellite TV channels - because sometimes nobody at all out of those 5,100 homes is watching them - and by Sky AdSmart, which delivers different adverts to individual homes. It may need to adapt using new technology and grow to cope, but nobody is seriously suggesting tracking what everybody in the UK watches on TV, at all times, on all devices. That would be ridiculous.

I'll be blunt. Any online data specialist who uses the 5,100 home sample to beat 'old fashioned' TV viewing figures, doesn't know what they're talking about.

Sampling is an incredibly useful tool and sometimes more isn't better, it's just more. More time to wait, more computer processing power, more cost and more difficulty getting to the same answer.


Monday, 7 April 2014

Visualising Everton 3 - 0 Arsenal

I've been playing with 3D visualisations of Opta football data over the past few weeks, trying to build a picture of the action areas in a game. This post is me thinking out loud more than a finished product, but there's definitely something about 3D mapping that does work.

3D is usually to be avoided (particularly in pie charts!) and I've said as much in my guide to data visualisation for marketers. The problem when visualising touches in a football game on a flat pitch, though, is that very often you'll see something like this:



It's obviously displaying too much data. Converting to a heat or contour map helps, but unless differences between areas are very starkly defined, it doesn't make important areas of the pitch really jump out.

So, 3D...

I've taken the data from the Everton vs. Arsenal game yesterday and, with R and rgl, used it to create a contoured surface. Add flags for shot locations and a textured surface for the pitch and you get the images below.
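For anyone who wants to try something similar, the gist of the approach is this. It's a sketch rather than my exact code, and it assumes you already have data frames of touch and shot coordinates - the column names are just for illustration:

# Turn touch coordinates into a smoothed density surface and draw it in 3D.
# 'touches' and 'shots' are assumed to be data frames with Opta-style x and y
# columns on a 0-100 pitch.
library(MASS)   # for kde2d
library(rgl)

dens <- kde2d(touches$x, touches$y, n = 50, lims = c(0, 100, 0, 100))

persp3d(dens$x, dens$y, dens$z, col = "darkgreen", alpha = 0.8)

# Mark shot locations as vertical 'flags' rising out of the surface
flag_top <- max(dens$z) * 1.2
segments3d(rep(shots$x, each = 2),
           rep(shots$y, each = 2),
           rep(c(0, flag_top), times = nrow(shots)),
           col = "red", lwd = 2)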





You can see - as we've found before - how Everton concede the centre in favour of the wings and the importance of Leighton Baines on Everton's left. Despite that ball movement through the wings, Everton's shot locations are more central than Arsenal's, with Arsenal taking a number of inaccurate shots from wide on the left. Everton's two goals came from almost the same spot, with the third being an Arteta own goal.

I'll keep posting these from time to time and working on the visualisation. They're not a finished product, but I like the effect and think it's worth persevering with. Any ideas, or games you'd really like to see? Let me know in the comments.

Tuesday, 11 March 2014

Mapping UK Adland

I've been putting together a lot of advertiser spend data recently, for our own internal Tableau dashboards, and thought it might be fun to throw the dataset at R too and make something less functional but a little bit prettier.

These are contour maps showing the locations of UK advertisers spending more than £500k on TV, radio, print and posters last year. Darker equals more businesses in the area and I've deliberately dropped legends to avoid cluttering up the maps.
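For the curious, the recipe is roughly the one below. It's a sketch rather than my exact code, and it assumes the advertisers have already been geocoded into a data frame of longitudes and latitudes:

# A base map from ggmap with a 2D density layer drawn over the top.
# 'advertisers' is assumed to be a data frame with lon and lat columns, one
# row per business spending over £500k.
library(ggmap)

uk <- get_map(location = "United Kingdom", zoom = 5)

ggmap(uk) +
  stat_density2d(data = advertisers,
                 aes(x = lon, y = lat, fill = ..level.., alpha = ..level..),
                 geom = "polygon") +
  scale_fill_gradient(low = "white", high = "darkblue") +
  theme(legend.position = "none")   # legends dropped, as in the maps below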

Huge thanks to the people behind R and the ggmap package, who are much, much cleverer than I am!


UK businesses spending more than £500k on advertising in 2013 (Click for bigger)



Focussing on England and Wales...



It's not all about London...



Nobody goes South of the River...


Friday, 14 February 2014

Premier League attack patterns visualised

Yesterday, I posted some visualisations of approach play in the Premier League. They describe how passes into a 'shooting zone' in front of the goal tend to be more successful when they come directly, rather than from wide areas.

I've started to play with these visualisations for individual teams and a few people have asked how they look, so today I'm posting attack patterns for the current Premier League top seven. We're looking at the number and success rate of passes played into a boxed-out 'shooting zone'. Data covers the first half of the current Premier League season, up to the end of January.

For the following heat maps...

Size of square = number of passes
Colour of square = pass success rate

Large and green is good; large and red is not! It's important to look for clusters of colour rather than concentrating on individual squares because when we're looking at only one team, the number of passes included is lower.
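The aggregation behind each chart is straightforward - something along these lines, assuming a data frame of passes attempted into the shooting zone, with origin coordinates and a completion flag (names invented for illustration):

# Bin pass origins into squares, then count passes and calculate the
# completion rate per square. 'passes' is assumed to have columns x and y
# (origin, on a 0-100 pitch) and completed (TRUE/FALSE).
passes$x_bin <- cut(passes$x, breaks = seq(0, 100, by = 10))
passes$y_bin <- cut(passes$y, breaks = seq(0, 100, by = 10))

heat <- aggregate(completed ~ x_bin + y_bin, data = passes,
                  FUN = function(z) c(n = length(z), success = mean(z)))

# Size of square   = number of passes   (heat$completed[, "n"])
# Colour of square = pass success rate  (heat$completed[, "success"])
head(heat)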









Teams are attacking the goal on the right and are listed in order of current league position. Yes, I picked the top seven because everybody wants to see how the Man United one looks.


Chelsea
Mixed approach with occasional long passes from deep. Larger number of incomplete passes from wide on the right.


Arsenal
High success rates with close, central passes and very rarely played long from deep. Significant volume of passes from advanced wide positions, but with low success rates.


Manchester City
Varied approach with good success rates from almost all areas.


Liverpool
Mixed approach with low volume of passes from very wide touchline positions. Attacks from right wing weaker than left.


Tottenham Hotspur
Greater success rates through the centre than from either wing, but high volumes of unsuccessful passes played from advanced and wide positions.


Everton
The Leighton Baines effect. High volume of passes from wide left but with low completion rates. Passes from advanced right also with low completion. Very few attempts through the centre and occasional long balls from deep.


Manchester United
Some approaches through the centre but attacks weighted towards wings. High volume of longer diagonal balls from the right, with low success rates.


Thursday, 13 February 2014

How can an attacking team get close enough to expect a goal?

There's been some great work done in football analytics recently, looking at a team's scoring chances from different positions on the pitch, which has led to the calculation of various Expected Goals (ExpG) metrics. However it's calculated, in essence ExpG gives a player's chance of scoring from a shot, given his position on the pitch. Add up the probabilities for a group of shots and you can work out how many goals a team 'should' have scored from them. Have a look at Statsbomb if you'd like to read up on what's been available up to now.
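If the "add up the probabilities" step sounds abstract, it's only this (the shot probabilities are invented purely to show the arithmetic):

# Expected Goals for a set of shots: sum each shot's scoring probability.
shot_probs <- c(0.32, 0.08, 0.05, 0.11, 0.03, 0.21)
sum(shot_probs)   # 0.8 expected goals from these six shots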

I've managed to assemble a decent-sized database of pass and shot locations from across the first half of the 2013-14 Premier League season and wanted to see if I could take Expected Goals a step further. As an indicator of shot success, Expected Goals typically paints a picture of the penalty area, with the six-yard box as a hotspot, getting colder the further out you move from goal. To a certain extent, its outputs are relatively obvious: if you shoot from closer in, you have a higher chance of scoring, and shots from further out are less likely to be converted.

That's not to say Expected Goals isn't a useful metric - far from it - but it doesn't do a great deal for our understanding of how to create goals. We can quantify how much better it is to shoot from closer to the goal, but how do you get closer to the goal in the first place? If your attacks break down trying to reach the shot conversion hotspot, should you even try to get there, or just take your chances from range?

A couple of days ago, I tweeted an image of pass completion data, which we'll be building on in this post.


Pass success rate by destination


The image shows the probability of completing a pass into different areas of the pitch. We're not worried about where the ball is coming from for the moment, but are looking at the chances of passes into different areas being successful.

It's clear to see how - playing from left to right - passing accuracy starts to break down in the opposition half and then drops dramatically at the boundaries of their penalty area.

Even with half a season's worth of passes and shots, we're going to struggle with the number of data points available as this analysis progresses, so let's merge the granularity of that first image into some larger pitch areas.


Pass success rate by destination



We now have a picture of how difficult it is to pass into each area of a football pitch. What about shots?

From the same dataset, here's an average player's probability of scoring with shots from different pitch locations. Penalties are excluded and I've hidden squares with fewer than twenty shots to clean the data up a little.


Shot conversion rate by shot location



As a manager, you're on the horns of a dilemma. Scoring probability climbs to over 30% in the centre of the six-yard box, but your chances of passing the ball into that location are slim.

What if we combine the two visualisations?

Pass success rate multiplied by scoring probability gives an indication of the likely success of an attacking strategy. Pass to an easier area outside the box and shoot from there? Or attempt to work the ball closer, at the risk of losing possession?
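The combination itself is nothing more exotic than multiplying the two grids together, element by element. A sketch, assuming the two matrices cover the same pitch squares:

# Probability of completing a pass into a square, multiplied by the probability
# of scoring from a shot taken there. 'pass_success' and 'shot_conversion' are
# assumed to be matrices built on the same pitch grid as the maps above.
attack_value <- pass_success * shot_conversion

# For example, a 70% chance of completing the pass and a 4% conversion rate
# gives a 2.8% chance that the move ends in a goal:
0.70 * 0.04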


Pass success probability * shot conversion rate


It turns out to be far from a clear-cut choice. There's a relatively large area, stretching from the edge of the six-yard box to well outside the penalty area, where the chances of working the ball into position and then scoring once you have balance out at a combined 2-3%. It's not as simple as 'closer to the goal is better', and the balance in any one game is almost certainly dependent on the passing quality of the individual teams and how well their opponents defend.

If we box out that 2-3% conversion area, we can move the analysis on another step.


Pass success probability * shot conversion rate


How should a team attempt to move the ball into that boxed-out shooting zone? There are three broad choices: directly from the direction of the centre circle, diagonally, or from the wings.

David Moyes has come in for a lot of criticism this week following Manchester United's draw with Fulham, where his players hit over eighty crosses in ninety minutes. We should be able to show here whether crossing, or a direct approach, is the more successful strategy.


Probability of achieving a successful pass into shooting area


Note that I've changed the colour scale on the above image to peak at 75% rather than 100%, since the average success rate of these passes is lower than when considering the whole pitch. Squares are only shown if they've been the origin of at least twenty passes.

Once you move beyond the eighteen-yard line, pass success probability drops off quickly. Touchline crosses from a 'chalk on his boots' classic winger have success rates as low as 30%. Other things being equal, the best chance of passing the ball into our key zone comes from a direct or diagonal move.

If you're thinking "but that's not fair, most of the passes included here will be targeted at locations outside the box", then you're right. Let's tighten up our key shooting zone, to a central area of the eighteen yard box surrounding the penalty spot.


Probability of achieving a successful pass into close shooting area


Still want to hit crosses all day?

The probability of a pass from the wings finding a team mate in the shooting zone is 30-40%, while moving through the central area has a success rate of 40-50%.

This isn't the end of the story, but it's where I'll stop for now. There are many more factors to be considered, including absolute volume of passes and the fact that a successful pass isn't the same as creating a shooting chance. This analysis will provide a base to work from though and one that I'd like to extend next into different types of teams.

Ultimately, I hope that this type of analysis could answer questions such as...

Should teams with worse passing shoot more often from long range? And vice versa, where is the optimal shooting area for a team that passes with a very high success rate?

How do optimal strategies change, based on specific opponents?

(using significantly more data) Can we identify hotspots where passes into the shooting zone have higher success rates? Versus specific opponents? When specific defenders are on the pitch?

Eventually, I believe an approach like this might be able to identify defensive weaknesses in a specific team and optimal attack strategies for their opponents.