Anniversaries are more than mere dates on a calendar. Whether the events they recall are happy, sad or more mixed, they give us an opportunity to reflect on the past (‘how things have changed since then!’) and to look to the future (‘I wonder where we’ll be in another five years?’).
I started working with the police on 8 March 1982, so I have recently celebrated my first 40 years of working in this field.
This was a very significant time for British policing. Not because of my arrival on the scene, I hasten to add. But I do believe we should use anniversaries for self-reflection, to help us understand our place in time. And I have to admit how fortunate I was to have started my career in the early 80s.
A few months before I started, the Scarman Report on the Brixton riots was published; Scarman’s recommendations were to be incorporated in the 1984 Police and Criminal Evidence Act. And a few months after I started, Sir Kenneth Newman became Commissioner of the Met, and led radical managerialist reform by establishing a version of policing by objectives as the model for divisional management. The following year, Home Office Circular 114/83 required chief constables, ever hungry for more resources, to demonstrate that they were using their existing resources effectively and efficiently.
Policing was opening up to increasing scrutiny and accountability. And – significantly for me, with my background in the analysis and interpretation of data – the police found themselves needing to use wider sources of information. Questionnaire surveys were coming into fashion, for consultation and for assessing police effectiveness. Social and demographic data were beginning to be seen as sources of insight in predicting public order flashpoints. And with the need to demonstrate effectiveness and efficiency, as well as progress towards local objectives, came the first glimmer of recognition that counting, measurement and statistics had a vital part to play in police management.
Watching and counting
We learn to count as infants, and numbers remain central throughout our lives in so many ways. Sometimes a number will be straightforward, unproblematic. How many bricks do I need to build a new garden wall? I can work this out, provided I know the dimensions of the wall and of the bricks. And I’ll add a few extra in case of mishaps.
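The garden-wall sum really is this simple, and it can be written out in a few lines. The dimensions below are invented for illustration; the point is that every quantity behind the number is known.

```python
# A toy version of the garden-wall calculation: a number that is
# straightforward because every quantity behind it is known.
# All dimensions (in mm) are invented for illustration.

def bricks_needed(wall_len_mm, wall_ht_mm, brick_len_mm, brick_ht_mm,
                  spare_pct=5):
    """Bricks per course x number of courses, plus a few spares."""
    per_course = -(-wall_len_mm // brick_len_mm)   # ceiling division
    courses = -(-wall_ht_mm // brick_ht_mm)
    bricks = per_course * courses
    # "a few extra in case of mishaps"
    return bricks + -(-bricks * spare_pct // 100)

# A 4 m x 1.2 m wall, with 225 mm x 75 mm bricks (mortar joint included):
print(bricks_needed(4000, 1200, 225, 75))
```

No nagging question here: the number means exactly what it says.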
But sometimes the meaning of a number will be problematic. Behind nearly every number there is (or should be) a nagging question: what does it mean?
I had a memorable introduction to this problem back in 1983, when I was studying the introduction of neighbourhood watch in London. In the division where I was working, 12 neighbourhood watch schemes were initiated in the course of the first year. But I knew of another division in London where they had more than 250, more than twenty times as many!
It didn’t take me long to find out the reason for the dramatic difference.
In the former division, each scheme was run by a community police officer in conjunction with a tenants’ or residents’ association; they had meetings, newsletters, crime prevention talks and property marking sessions, as well as signs on lampposts and stickers in windows. Being a ‘member’ of one of these schemes actually meant something.
In the latter division, they did a mass leaflet drop, inviting residents to put a sticker (provided with the leaflet) in their window. A couple of weeks later they sent special constables round the division to look for stickers in windows. If a street or housing estate had at least one sticker in a window, then it qualified as a neighbourhood watch scheme.
The following year, in response to a parliamentary question, the Conservative Home Secretary David Waddington stated that there were more than 125,000 neighbourhood watch schemes throughout England and Wales. I felt I had a privileged insight into how such an impressive number had been achieved. Sure, there would be some worthwhile, substantive initiatives among them. But how many were merely stickers in windows?
(Allow me a moment of nostalgia here. Following this parliamentary statement, the editor of Radio 4’s World at One invited me to comment on it, so I went to Broadcasting House to be interviewed by Sir Robin Day, the foremost political interviewer of the time. Ah, the kudos, I thought! Unfortunately, the great man, accustomed to interviewing heads of state, was completely uninterested in talking to a pipsqueak researcher, and the interview was brief, superficial, and never broadcast. Ah, well…)
Statistics of course
Collecting and analysing data became more important through the 80s, although it was mostly done by police officers themselves. Each police force employed a statistician, but their work was largely concerned with collating crime figures to submit to the Home Office. By the early 90s, however, forces were beginning to establish specialist research posts, filled by police staff. This created a clear training need, and I found a niche providing courses on all aspects of research methodology, in particular statistical analysis. With the advent of the National Intelligence Model in 2000, the role of analyst was born, and the demand for statistics training was well established.
It has been my good fortune to have taught statistical methods to police analysts (and quite a few police officers) for more than 30 years now. It has been rewarding and interesting, and I’ve met some really nice people along the way.
Of course, it has also been challenging. That’s why it continues to be interesting. One of the challenges is for the analyst to persuade her audience that statistical analysis is a means to an end, not an end in itself. Ultimately, it’s not the numbers themselves that matter, but what they mean. Disciplined interpretation is a skill in itself.
Another challenge is posed by the ease with which statistics can be calculated. By all means delegate the calculation of your standard deviation or chi squared test to a computer. Why on earth would you work it out on a calculator? But if you want to use these statistics properly – to be a reflective practitioner, to use Donald Schön’s phrase – you need to understand how and why they work, what their limitations are, and in what circumstances they can and can’t be used.
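One way to see how and why these statistics work is to compute them by hand once, with the mechanics laid bare. A minimal sketch, using invented data:

```python
# The two workhorses mentioned above, computed "by hand" so the
# mechanics are visible. All data are invented for illustration.
import math

# Sample standard deviation: spread around the mean, dividing by n-1
# (Bessel's correction) because these are sample, not population, data.
data = [12, 15, 11, 14, 18, 13, 16]
mean = sum(data) / len(data)
var = sum((x - mean) ** 2 for x in data) / (len(data) - 1)
sd = math.sqrt(var)

# Chi squared goodness-of-fit statistic: the sum of (O - E)^2 / E.
# Note the assumptions hiding inside it: the data are counts (not
# rates or percentages), the observations are independent, and the
# expected counts are large enough (rule of thumb: at least 5).
observed = [30, 14, 34, 45, 57, 20]
expected = [200 / 6] * 6            # equal expected counts, say
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

print(round(sd, 2), round(chi2, 2))
```

Writing it out once makes the limitations concrete: the formulas tell you exactly where the assumptions enter.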
And speaking of standard deviations and chi squared tests: there are also means, medians, correlation coefficients, and all the others that analysts will be familiar with; the Poisson distribution and Student’s t test (which you may be a little hazier about); not to mention the Kolmogorov-Smirnov test, or the Wald-Wolfowitz runs test…
Yes, there are so many tests that choice is bewildering. It was ever thus: at university (and I’m now looking back well beyond 40 years!) we all struggled with the question of which statistical test to use in what circumstances. It is one of the most intrinsically difficult problems facing the analyst, and one for which no perfect algorithm can be written (although I’ve had a go at writing my own imperfect one).
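To give a flavour of why no perfect algorithm exists, here is a deliberately crude sketch of the kind of decision logic involved. This is a toy of my own devising, not the author’s algorithm, and it ignores far more cases than it handles:

```python
# A deliberately crude sketch of the test-selection problem: a few of
# the questions an analyst must answer before a sensible test emerges.
# This toy ignores sample size, paired designs, and much else.

def suggest_test(outcome, groups, normal=True):
    """outcome: 'category', 'count' or 'continuous';
    groups: number of independent groups being compared;
    normal: whether a normal distribution is a reasonable assumption."""
    if outcome == "category":
        return "chi squared test"
    if outcome == "count":
        return "Poisson-based comparison"
    if outcome == "continuous":
        if groups == 2:
            return "t test" if normal else "Mann-Whitney U test"
        if groups > 2:
            return "one-way ANOVA" if normal else "Kruskal-Wallis test"
    return "unclear - go back to the data"

print(suggest_test("continuous", 2, normal=False))
```

Even this caricature shows why the choice is bewildering: each branch hides further questions, and real data rarely answer them cleanly.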
This may be why so many analytic reports fall back on the standard deviation. It is simple to calculate, easy to understand, and familiar when presented in the form of ‘control charts’ (which aren’t really control charts at all, but that’s a subject for another blog).
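The usual recipe behind those charts is easy to reproduce: flag any period that falls more than two standard deviations from the mean. A minimal sketch, with invented weekly counts:

```python
# The recipe behind the typical analytic-report 'control chart':
# flag any week more than two standard deviations from the mean.
# The weekly counts below are invented for illustration.
import statistics

weekly_counts = [21, 18, 25, 22, 19, 23, 20, 38, 24, 21]
mean = statistics.mean(weekly_counts)
sd = statistics.stdev(weekly_counts)        # sample SD (n-1 divisor)

upper, lower = mean + 2 * sd, mean - 2 * sd
flagged = [(week + 1, n) for week, n in enumerate(weekly_counts)
           if not lower <= n <= upper]
print(flagged)    # weeks whose count sits outside mean +/- 2 SD
```

Its very simplicity is the trouble: the same dozen lines appear in report after report, whatever the data actually call for.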
The standard deviation is an important and potentially very useful statistic, in its place. But it is overused, with the result that police performance reports often reflect a sort of analytic monoculture. I have to confess that I bear some responsibility for its overuse, because it has featured in every introductory statistics course I have ever taught.
Statistics is more than just a toolkit: it’s about the disciplined interpretation of data. But to serve that end effectively, analysts need to use a wider range of tools, to summarise and draw conclusions from statistical evidence. It’s time to diversify!