Foreword

The Australian Institute of Criminology has spent a number of years working with crime prevention agencies across Australia reviewing large-scale programs that involve the delivery of a variety of activities directed at preventing crime. Taken as a whole, this experience has shown that, despite good intentions and aspirations to evidence-based practice, both the level and quality of evaluations have been limited by several practical challenges. In turn, this has hampered efforts to develop a body of good-quality Australian evidence about what is effective in preventing crime and what is required to deliver effective interventions.
Using previously unpublished data collected as part of the reviews of two national Australian crime prevention programs, the authors examine the practical factors that impact on evaluation and make a number of important recommendations for the evaluation of projects delivered as part of large-scale community crime prevention programs. The authors argue that rather than persisting with traditional approaches that encourage local organisations to undertake potentially expensive and time-consuming evaluations of their own work, program managers and central agencies must become more proactive and increasingly innovative in their approaches to evaluation.
Adam Tomison
Director
A basic principle underpinning modern crime prevention is that it requires the practical application of research and evaluation findings in the development and implementation of strategies to reduce crime (AIC 2012; ECOSOC 2002). Evaluation is therefore an important prerequisite for effective crime prevention. A good evaluation can determine whether a program has been implemented as planned (and if not, why not), what outcomes have been delivered as a result, whether the stated objectives of that program have been achieved and the reasons that a program did or did not work. This can inform improvements to that program, as well as decisions about whether it should be continued. It also contributes to the development of a sound evidence base that can be used by policymakers and practitioners in deciding what to do (and how to do it) to address the crime problems that confront them.
However, despite growing recognition and support for the evaluation of crime prevention efforts internationally (Bodson et al. 2008; Idriss et al. 2010), several reviews of local crime prevention programs delivered in Australia have highlighted notable deficits in both the amount and quality of evaluation practice (Anderson & Homel 2005; Anderson & Tresidder 2008; Homel et al. 2007; Willis & Fuller 2012). This has had important implications for the quality of the evidence base available to decision makers in this country.
In this paper, the factors that have impacted on the standard of evaluation in large-scale community crime prevention programs are examined. Drawing upon a number of crime prevention capacity building projects undertaken by the Australian Institute of Criminology (AIC), several recommendations to enhance the level and quality of crime prevention evaluation are proposed.
Community crime prevention in Australia
In Australia, there has been an emphasis on delivering crime prevention through a community-based approach (Cameron & Laycock 2002; Cherney & Sutton 2007; Henderson & Henderson 2002; Homel P 2005). This approach has been reflected in both national and state and territory crime prevention programs (CPQ 1999; Homel et al. 2007; NSW DPC 2008; OCP 2004). Similar approaches have been adopted internationally, including in the United Kingdom, Canada and New Zealand (Homel et al. 2004; Idriss et al. 2010; IPC 2008; NZ Ministry of Justice 2003).
In practice, this has involved central agencies responsible for crime prevention policy developing an overarching program, strategy or framework that outlines the overall goals and priority areas and (in some cases) the general approach to preventing crime, which provides the basis for the coordination of relevant stakeholders (UNODC 2010). Central agencies often provide short-term funding or technical support or establish partnerships with regional branches of government authorities, local government and non-government organisations to plan and deliver crime prevention initiatives, and to implement the national or state and territory strategy (Henderson & Henderson 2002; Morgan 2011).
A range of intervention types has been delivered through this approach. There has been an emphasis on community development and engagement, particularly through local government crime prevention planning processes, but also involving police, government and non-government organisations (Homel P 2005; Morgan 2011; Morgan & Homel 2011; Pugh & Saggers 2007). There has been considerable growth in developmental crime prevention, which refers to those initiatives that involve providing basic services or resources to individuals, families, schools or disadvantaged communities to reduce risk factors and enhance protective factors for crime and antisocial behaviour (Homel et al. 1999; Homel R 2005; Weatherburn 2004). For example, a review of projects funded by the National Community Crime Prevention Programme (NCCPP) and, more recently, the Proceeds of Crime Act 2002 (POCA) funding program, shows that there has been significant investment in personal development projects, support services and initiatives designed to enhance service coordination, and that these initiatives were commonly targeted towards at-risk young people and individuals at risk of reoffending (Homel et al. 2007; AGD 2011). Education projects, employment and vocational skills training, diversionary activities for young people (such as sport and recreation projects), mentoring and arts development projects also proved popular (Homel et al. 2007).
Situational crime prevention and broader urban planning initiatives, which aim to modify the physical environment to reduce the opportunities for crime to occur, have also become increasingly common (Sutton, Cherney & White 2008). This has included crime prevention through environmental design, closed circuit television in public spaces and a range of awareness campaigns and target hardening measures to improve personal, vehicle, household and business security (Bodson et al. 2008; Clancey 2010; Grabosky & James 1995; Gant & Grabosky 2000; Homel et al. 2007; Wilson & Sutton 2003). The growth in popularity of environmental approaches reflects the increasingly prominent role of local government, which has been responsible for leading the development of local crime prevention plans, developing and implementing crime prevention initiatives in partnership with other stakeholders, and performing a range of functions relevant to environmental crime prevention (Anderson & Homel 2005; Anderson & Tresidder 2008; Cherney & Sutton 2007; Homel P 2005; Morgan & Homel 2011).
The crime prevention evidence base
The emphasis on delivering crime prevention through small-scale, community-driven projects has had practical implications for the way approaches to evaluation have developed. This has, in turn, affected the evidence base available to practitioners.
Large-scale systematic reviews have shown that there is an accumulated body of high-quality research demonstrating the effectiveness of different approaches to crime prevention (eg Sherman et al. 2006). However, the number of Australian initiatives included in the systematic reviews, meta-analyses and databases describing effective (and ineffective) interventions is relatively small when compared with other countries and the level of crime prevention activity delivered in Australia (Aos et al. 2011; Manning, Homel R & Smith 2010; Sherman et al. 2006).
A recent review of community-based crime prevention strategies suitable for implementation by local government to address a number of priority crime types in New South Wales reaffirmed that the level, quality and strength of the evidence in support of different intervention types varies considerably (Morgan et al. 2012). While there was strong evidence in support of some situational methods and the requirements for their successful implementation, much of this evidence was from international studies completed over a decade or more ago. Other common community-based prevention strategies did not appear to be supported by evidence of effectiveness (or evidence demonstrating that they were not effective). For example, there was little evidence about the effectiveness of diversionary projects targeting young people (such as afterschool recreation), which are a common strategy employed by local government, police and other community-based organisations.
There was also limited evidence in support of community-based initiatives that involve delivering services to individuals at risk of becoming involved in crime, or offenders at risk of reoffending (eg skill development, support services or employment programs), although this may be in part explained by the methodology used for the specific review. The review also confirmed that there continues to be a lack of evidence as to the effectiveness of efforts to modify community-level factors such as social cohesion, access to housing, employment and education to reduce violent or property crime (Hope 1995; Morgan et al. 2012; Welsh & Hoshi 2006), although again this may be a function of the absence of quality evaluation studies in this area.
Further, while there is strong evidence in support of early developmental prevention programs across a range of outcome domains (Farrington & Welsh 2006; Manning, Homel & Smith 2010), evidence of the effectiveness of developmental crime prevention in reducing juvenile offending in Australia has been limited to a small number of demonstration projects (eg Homel et al. 2006), which are yet to be implemented and proven on a larger scale.
The lack of recent, high-quality evidence from Australia or evidence in support of common intervention types is likely a result of there having been less emphasis on systematically generating evidence than on applying the available research (Homel R 2005; Homel 2009a; Sutton, Cherney & White 2008). In Australia, the focus on the community-based approach to crime prevention has also seen much of the responsibility for evaluation devolved to local agencies, including local government and non-government organisations—the very agencies with the least resources to undertake quality evaluation work (Anderson & Homel 2005; Anderson & Tresidder 2008; Homel et al. 2007).
Experience from both national and state and territory crime prevention programs suggests that this approach has been largely unsuccessful in generating high-quality evaluations. The vast majority of initiatives implemented have not been subject to evaluation or ongoing monitoring and, where they have been evaluated, the methodological rigour of those evaluations has often been poor (Anderson & Tresidder 2008; English, Cummings & Stratton 2002; Homel et al. 2007; Willis & Fuller 2012). Overall, there is considerable scope to improve the way in which crime prevention programs and projects are evaluated.
Evaluation good practice
Two types of evaluation are common in crime prevention—process and outcome evaluation (Idriss et al. 2010). A process evaluation aims to improve understanding of the activities that are delivered as part of a program and assess whether they have been implemented as planned. An outcome evaluation is more concerned with the overall effectiveness of the program. The range of questions that can be addressed by both types of evaluation is presented in Table 1.
Table 1: Examples of process and outcome evaluation questions

Process evaluation questions:

- What are the main components or activities delivered as part of a program?
- Is the program currently operating or has it been implemented as it was originally designed (ie program fidelity)?
- What are the characteristics of the problem, places and/or people being targeted by the program?
- Are the intended recipients of a program accessing the services being provided, do they remain in contact with the program and does the program meet the needs of participants?
- What is the nature and extent of stakeholder involvement in all stages/aspects of the program?
- Is the program consistent with best practice in terms of its design and implementation?
- What factors impact positively or negatively upon the implementation or operation of the program?
- How appropriate are the governance arrangements, operating guidelines and, where applicable, legislative framework in supporting the operation of a program?
- What is the cost associated with the operation of the program? Is the program adequately resourced?
- How efficient has the program been in delivering key activities?
- What improvements could be made to the design, implementation and management of the program?

Outcome evaluation questions:

- To what extent has the program achieved its stated objectives?
- Did the program make a difference in terms of the problem it sought to address?
- What outcomes have been delivered as a result of implementing the program?
- What impact has the program had in the short term on participants’ knowledge, attitudes, skills or behaviour? Are these outcomes sustained over time?
- What longer term outcomes have been delivered, including the impact on crime rates and community safety?
- Who else benefits from the program, such as program staff, stakeholders and government?
- Were there any unintended consequences or outcomes from the program?
- Which program activities or components contributed to the short and long-term outcomes that have been observed?
- What external factors impacted positively or negatively on the effectiveness of a program and the outcomes that were delivered?
- What changes could be made to the program to improve its overall effectiveness?
- What are the financial benefits of a program relative to the costs associated with its operation (ie return on investment)?
There is no single best approach to addressing these questions. Selecting an appropriate evaluation design and research method requires consideration of the characteristics of a program, the purpose of the evaluation, the available options and the views of key stakeholders (English, Cummings & Stratton 2002).
However, there are certain accepted standards that can help guide decisions on how to conduct evaluation. The United Nations Evaluation Group specified eight criteria for good evaluations:
- the evaluation process should be transparent;
- evaluators should have the necessary expertise;
- evaluation should be conducted by someone independent of the program;
- evaluators should remain impartial and involve a wide range of stakeholders;
- the evaluation design and methodology should be purpose driven;
- there should be adequate planning for evaluation so that the necessary data can be collected;
- the evaluation design and methods should be high quality; and
- there should be follow-up to the evaluation to ensure that the recommendations have been implemented (Idriss et al. 2010; UNEG 2005).
Specific standards have also been developed for assessing the quality of outcome evaluations. There are a variety of approaches to measuring crime prevention outcomes, including experimental research designs (eg randomised controlled trials, quasi-experimental designs and pre- and post-test comparisons), qualitative inquiry, participatory action research and realist or theory-driven approaches. However, experimental research designs have been the most common in crime prevention (Idriss et al. 2010). The Scientific Methods Scale (SMS) was therefore developed as a guide to assessing the methodological quality of outcome evaluations and forms the basis of systematic reviews of crime prevention methods undertaken by the Campbell Collaboration (Farrington et al. 2006; Sherman et al. 2006). While the SMS has its critics (Hope 2005) and there are limitations with this approach (discussed later in this paper), it has become recognised as an important reference for both researchers and evaluation audiences. A research design that achieves level three on the SMS, with measures of the outcome pre- and post-intervention and an appropriate comparison group against which to compare results (a quasi-experimental design), is considered the minimum for drawing valid conclusions about the effectiveness of a crime prevention intervention (Farrington et al. 2006; Sherman et al. 2006). In practice, this standard frequently proves difficult to meet, as illustrated by the experience of the Australian Government’s NCCPP and POCA funding program, used here as illustrative case studies.
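What a level three design demands can be stated compactly. As a minimal sketch (an illustration only, assuming a single treatment area $T$ and a single comparison area $C$, each with the outcome measured before and after the intervention), the estimated effect is the change in the treatment area net of the change in the comparison area:

$$\hat{\tau} = \left(\bar{Y}_{T}^{\text{post}} - \bar{Y}_{T}^{\text{pre}}\right) - \left(\bar{Y}_{C}^{\text{post}} - \bar{Y}_{C}^{\text{pre}}\right)$$

A background trend common to both areas (eg a region-wide decline in burglary) cancels out of $\hat{\tau}$. A simple pre-post comparison (level two on the SMS) cannot make this separation, which is why it supports only weaker conclusions about effectiveness.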
The National Community Crime Prevention Programme and Proceeds of Crime Act 2002 funding program
The NCCPP and POCA funding program, both administered by the Australian Government Attorney-General’s Department, were large-scale competitive grants programs that provided short-term funding to community-based organisations to deliver local crime prevention initiatives. The AIC was commissioned to undertake a review of both programs—the review of the NCCPP was completed in 2007 and the evaluation of the POCA funding program in 2012. Both reviews involved a multi-level process of data collection and analysis. This included a review of projects funded as part of the program using data collected according to a comprehensive classification scheme developed specifically for the review. The reviews also involved the distribution of a qualitative survey to managers of funded projects addressing aspects of project design, implementation and review; consultation with key stakeholders through a series of interviews and a workshop; and a national survey of crime prevention professionals (NCCPP only).
A key feature of both the NCCPP and POCA funding program was that each project’s funding agreement had a requirement that provision be made for project evaluation. As part of the review process for each program, information on the evaluation methodology and results was collected for 223 NCCPP-funded projects and 35 POCA-funded projects. Among the projects reviewed by the AIC, 17 percent (n=37) of organisations in receipt of NCCPP funding and 40 percent (n=14) of organisations in receipt of POCA funding had submitted a final evaluation report at the time of collating project information. The remainder had outlined a proposed methodology in their grant application and funding agreement, but had not yet submitted a final evaluation report. Information was recorded on the evaluation methodology for all projects. For those projects that had submitted a final report, this data described the actual evaluation methodology, while for those projects yet to submit a final evaluation, it reflected the proposed evaluation methodology.
Analysis of data for both programs revealed that the quality of final evaluation reports and proposed evaluation methodologies varied across projects (see Table 2). The most common evaluation methods proposed or used by organisations in receipt of funding involved formal mechanisms to collect feedback from project stakeholders, workers and participants at the completion of the project (or following their contact with the project). Funding recipients were less likely to propose or to use methods that required longer term planning and investment and that provide the most useful information about the overall impact of a project, such as comparing crime data, self-report data, observational data or community surveys collected before and after the project. As a result, few evaluations (completed or proposed) met level three, or even level two (measures of the outcome pre and post intervention without a comparison group), on the SMS.
Table 2: Proposed and actual evaluation methodologies of NCCPP and POCA-funded projects (%)

| | NCCPP: projects not yet evaluated (a) (n=186) | NCCPP: projects evaluated (b) (n=37) | POCA: projects not yet evaluated (a) (n=21) | POCA: projects evaluated (b, c) (n=14) |
|---|---|---|---|---|
| Research methods | | | | |
| Anecdotal evidence from staff and participants of project effectiveness | 66 | 92 | 29 | 43 |
| Formal feedback sought from participants/clients (eg surveys, formal interviews) | 56 | 78 | 76 | 64 |
| Formal feedback sought from key stakeholders | 64 | 59 | 38 | 50 |
| Formal feedback sought from project workers | 52 | 54 | 52 | 57 |
| Comparison of observational data pre and post intervention | 44 | 35 | 67 | 71 |
| Survey of self-reported behaviour/attitude pre and post intervention | 21 | 16 | 33 | 36 |
| Community survey of fear/perception of crime pre and post intervention | 20 | 5 | 14 | 0 |
| Comparison of target area crime statistics pre and post intervention | 28 | 5 | 5 | 0 |
| Comparison of target group crime statistics pre and post intervention | 17 | 5 | 5 | 0 |
| Survey of self-reported offending pre and post intervention | 5 | 0 | 19 | 7 |
| Responsibility for evaluation | | | | |
| Organisation in receipt of funding (internal evaluation) | 52 | 65 | 35 | 64 |
| Consultant or project partner (external evaluation) | 38 | 32 | 30 | 7 |
| Combination of internal and external evaluation | 4 | 0 | 10 | 7 |
| Unclear | 6 | 3 | 25 | 21 |
a: Refers to those projects that had outlined an evaluation methodology but had not yet evaluated their project. Reflects proposed evaluation methodology
b: Refers to those projects that were completed and had been evaluated. Reflects actual evaluation methodology
c: Projects funded in the earlier rounds of the POCA funding program primarily involved the provision of drug and alcohol treatment and may not have identified a reduction in crime as one of their primary objectives. This may have influenced the type of research methods used as part of the evaluation
Note: Individual evaluations could use multiple methodologies. Excludes six NCCPP-funded projects and four POCA-funded projects for which information on the evaluation methodology or status of the evaluation was unclear
Source: AIC NCCPP project database 2008 [computer file] n=223; AIC POCA project database 2012 [computer file] n=35
A comparison of the methodologies proposed by projects yet to be evaluated and those that had been evaluated, particularly among NCCPP-funded projects, showed that certain methods were often planned by organisations yet to conduct an evaluation but were less common among those projects that had actually been evaluated. For example, the proportion of projects that reported they either had used or would be using some measure of knowledge, attitude, skill or behavioural change pre- and post-intervention to measure the impact of their project was higher among those projects that had not yet been evaluated. Conversely, the proportion of NCCPP and POCA-funded projects that relied on anecdotal evidence was higher among projects that had been evaluated. Some care needs to be taken in interpreting these results, given that the two groups of evaluations (planned and actual) do not relate to the same projects. While further analysis showed little difference between the organisations and types of projects funded by the NCCPP that had and had not been evaluated, POCA-funded projects that had been evaluated were more likely to be drug and alcohol programs. Nevertheless, the differences between proposed and actual evaluation methods may be explained, at least in part, by the challenges encountered by project managers in undertaking evaluation (described below).
It is worth noting that in the case of the NCCPP, the projects that had submitted a final evaluation report were primarily from the earlier funding rounds. The funding body subsequently implemented strategies to improve the quality of evaluations submitted for projects from later rounds. This included efforts by the funding body to facilitate appropriate evaluation work during the contract negotiation stage and engaging the AIC to provide advice and assistance in the development of evaluation strategies for individual projects.
Further, a substantial proportion of projects funded by the NCCPP and POCA funding program (around one-third) committed to an external evaluation, reflecting growing recognition among community-based organisations of the importance of, and potential benefits from, engaging experienced personnel to evaluate their crime prevention activities. This was also due to encouragement from central funding bodies for project teams to engage skilled evaluators and to allocate sufficient funds for that purpose.
Nevertheless, given these limitations with evaluation, it was difficult to assess the overall impact of the activities funded by either program. Among the comparatively small number of projects that had been completed and evaluated, many outcomes went unreported or the methodology was inadequate to support any definitive conclusions. For example, of the 37 NCCPP-funded projects that had been evaluated, 24 aimed to reduce the rate of crime or antisocial behaviour among their target population or location (see Table 3). However, in reviewing the final evaluation reports, only eight provided evidence of any measurable change (of which seven demonstrated some success). Determining whether this change could be attributed to the projects themselves was almost impossible (due to the absence of a suitable comparison group, among other reasons).
Table 3: Reported outcomes of evaluated NCCPP-funded projects (number of projects)

| Outcome | Total projects with outcome listed as objective | Successful: NE | Successful: E | Unsuccessful: NE | Unsuccessful: E | Unclear whether outcome delivered |
|---|---|---|---|---|---|---|
| Decrease risk factors/increase protective factors associated with offending among target group | 25 | 7 | 10 | 0 | 2 | 6 |
| Reduction in targeted criminal/antisocial behaviour | 24 | 4 | 7 | 1 | 1 | 11 |
| Increased support for the targeted offender group or people in danger of becoming involved/victims of crime | 20 | 3 | 12 | 0 | 0 | 5 |
| Establishment and/or extension of a new/existing position/program | 19 | 2 | 15 | 0 | 0 | 2 |
| Increased community knowledge/awareness of targeted behaviour/issue/problem | 18 | 8 | 4 | 0 | 0 | 6 |
| Identify and engage local services | 15 | 4 | 6 | 0 | 0 | 5 |
| Increased community/social values | 15 | 6 | 5 | 0 | 0 | 4 |
| Increased use of organisational services/resources | 13 | 3 | 8 | 1 | 1 | 0 |
| Increased communication between the targeted group of people and the police and/or criminal justice agencies | 10 | 2 | 2 | 1 | 0 | 5 |
| Reduction in community levels of fear of crime | 9 | 3 | 2 | 0 | 0 | 4 |
| Increased support for victims and witnesses of crime | 8 | 2 | 3 | 0 | 0 | 3 |
| Improved access to legal services/understanding of the legal system for the targeted group of people | 7 | 1 | 3 | 0 | 0 | 3 |
| Increased reporting of an offence among targeted community | 5 | 1 | 0 | 0 | 0 | 4 |
| Improved response time to the targeted crime or antisocial behaviour(s) | 3 | 1 | 1 | 1 | 0 | 0 |
Note: NE=No evidence—the evaluation made an unsubstantiated statement in relation to this outcome. E=Evidence—the evaluation included documented evidence (qualitative or quantitative) to demonstrate this outcome. Includes only those projects that had been completed and have been evaluated
Source: AIC NCCPP project database 2008 [computer file] n=223
What was evident, however, was that completed evaluations more commonly provided evidence relating to project outputs, such as the establishment or extension of an existing project or the increased use of an organisation’s services. Where evidence of outcomes was documented, this ‘evidence’ was often subjective in nature and predominantly consisted of qualitative data collected from project staff, stakeholders, participants and their family members on the perceived impact of project activities (often in a way that was neither rigorous nor transparent). Many project evaluations failed to distinguish between short, medium and long-term outcomes and therefore either focused entirely on outputs or made unsupported assertions about the impact of their work.
These findings are not intended as criticisms of the managers of the funded projects per se. The practical difficulties facing community-based organisations in undertaking adequate evaluations have been widely acknowledged (Cameron & Laycock 2002; English, Cummings & Stratton 2002; Morgan et al. 2012). For example, respondents to a national survey of crime prevention professionals conducted as part of the NCCPP review supported the view that, for the most part, community-based organisations have limited access to adequate support and lack either the internal capability to undertake evaluation or the capacity to engage third parties to assist them in doing so. There were also concerns about the extent to which organisations have access to useful information and resources to assist them in undertaking an evaluation.
Many community-based organisations funded by the NCCPP and the POCA funding program reported having difficulties providing evidence of the effectiveness (or otherwise) of their crime prevention projects. Meeting the requirements for evaluation was one of the common challenges identified by project managers in the qualitative surveys:
- Project managers reported difficulties accessing data on key outcomes (such as recorded crime data for locations or individuals targeted by the project).
- Project managers were not always sure how to collect data on outcomes for participants and empirically tested data collection tools (eg participant questionnaires) were not readily available to community-based organisations.
- Several community-based organisations reported that they did not have the technical skills or capacity to collect and analyse appropriate evidence, were not best placed to undertake the evaluation and required additional support and guidance from the funding body.
- While many organisations endeavoured to contract external evaluators to evaluate their project, consultants were not always available, did not have the necessary subject matter knowledge or expertise, or were reluctant to be engaged for the life of the project for what amounted to a relatively small amount of funding.
- The definition of ‘evidence’ was regarded by some project managers as too rigid and as not recognising cultural differences, such as the use of stories and photographs by Indigenous communities.
- Funding was short-term and non-recurring and for many projects (especially those aiming to address the behaviour of individuals), evidence of a reduction or delay in criminal activity may only start to emerge years after the funding and hence after the evaluation had ended.
- There was some resistance to allocating resources to the evaluation, including investing time in regular data collection, instead of service delivery. Project managers were encouraged to allocate 10 percent of the overall project budget to evaluation, but some project managers reported spending large amounts of time and resources reporting on expenditure as part of the funding agreement—time that could have otherwise been allocated to evaluation.
This experience demonstrates that making conventional evaluation a mandatory requirement within a project funding arrangement does not guarantee that project activities will be rigorously evaluated. Rather, it can increase resistance to evaluation because of the perceived additional administrative burden and may adversely impact upon the quality of service delivery. Similar problems emerge where local agencies are encouraged by a central government authority to evaluate their own project activities with only minimal technical and financial support (Anderson & Tresidder 2008).
Even when an evaluation has been successfully undertaken, another challenge for evaluators, policymakers and practitioners has been transferring the knowledge gained from research and evaluation into more effective crime prevention policy and practice. For example, few of the evaluations produced by NCCPP funding recipients have been publicly released and those that have were not widely disseminated to other grant recipients. This is not uncommon for local crime prevention programs. As a result, ineffective projects are often replicated because of the lack of readily available evidence demonstrating that the chosen approach was not the most effective, or because there is insufficient evidence to convince decision makers (Homel et al. 2004).
Recommendations for improving the level and quality of crime prevention evaluation
These challenges are not insurmountable. Several recommendations for improving the level and quality of crime prevention evaluation in Australia are outlined in the remainder of this paper. These are divided into two sections—recommendations for program managers and central agencies, and recommendations for evaluators.
Recommendations for program managers and central agencies
Improving the standard of evaluation requires a shift away from approaches that rely solely on encouraging local organisations to undertake potentially expensive and time consuming evaluations of their own work. There is a need for program managers and central agencies to be more innovative and to rethink their approach to evaluation.
Target evaluation at the areas where it will be of most benefit
In many cases, the scale of a project, the total budget available and the existing evidence base mean that the type or level of evaluation that is frequently expected is not warranted or possible (Eck 2002; Homel et al. 2007). The National Crime Prevention Framework for Australia recommends that, where crime prevention policy involves a centralised program designed to support locally delivered initiatives, evaluation should be directed towards:
- the program as a whole, including the overall impact of the program and the appropriateness, efficacy and efficiency of mechanisms used to support the delivery of crime prevention at the local level;
- individual projects or initiatives (although not necessarily all of them), providing evidence as to their effectiveness in achieving desired outcomes, including explanations of why observed results occurred, as well as identifying practical challenges and lessons for implementing similar projects;
- clustered groups of projects (whether it be according to location, intervention type, crime problem, target groups etc), to draw conclusions as to the effectiveness of specific types of projects, and the relative contribution of certain contexts and project characteristics (AIC 2012).
Deciding which interventions should be subjected to more rigorous evaluation should be based on an assessment of the potential practical and policy significance of the findings, including gaps in the existing evidence base, as well as the potential for those interventions to be effectively evaluated (Lipsey et al. 2006). Consideration should be given to establishing a set of evaluation priority areas in consultation with the range of key crime prevention stakeholders within government agencies, the academic sector, local government and the community sector, such as those highlighted within the National Crime Prevention Framework (AIC 2012).
Future crime prevention programs could then reserve specific funding within the total program budget for the external evaluation of clusters of projects that address these priority areas (commissioned and managed by the central agency). This represents a more systematic approach to developing an evidence base on effective (and ineffective) crime prevention that is relevant to policymakers and practitioners. It would also free project managers to focus on service delivery and the routine performance management of their own work.
Monitor the performance of projects on a regular basis
Even with a more targeted approach to evaluation, project managers still need to monitor the implementation and impact of their work—focusing evaluation on certain projects should not come at the expense of effective project management or accountability. Rather, attempts to measure the impact of crime prevention efforts should involve a combination of performance measurement and evaluation (AIC 2012). Performance measurement refers to the process of regularly collecting and monitoring performance information (in accordance with an agreed framework or criteria), reviewing program performance (ie assessing whether a project is being implemented as planned and is meeting stated objectives) and using this information to identify where improvements might be made (Home Office 2007).
Recent experience from crime prevention programs, both in Australia and overseas, has demonstrated the potential value of effective performance measurement systems as an integral component of program development, management and evaluation processes (Homel 2006; Homel et al. 2007; Morgan & Homel 2011). To accompany, support and inform evaluation work (including by providing meaningful data for use in evaluations), crime prevention program managers are advised to develop a standard performance measurement and reporting framework, supported by a comprehensive information management system (not unlike the minimum datasets established in other areas, such as health), that is consistently applied to individual projects as a common project management and reporting system (Homel 2006). These processes enable the ongoing monitoring of program delivery, at both the individual project and aggregate program level, which can inform regular improvement to the delivery of crime prevention activity.
A recent AIC Technical and Background Paper (Morgan & Homel 2011), reporting on a model performance measurement framework for community-based crime prevention developed in 2008–09 on behalf of the WA Office of Crime Prevention, illustrates that such a process is a practical possibility. The framework was developed to assist the Office of Crime Prevention and local government partners to monitor and review the ongoing performance of local community safety plans across Western Australia.
In undertaking this work, the AIC examined local government practice domestically and internationally to identify important lessons to be applied to the performance measurement framework for local crime prevention plans. It was concluded that:
- there has been some attempt to implement these sorts of systems overseas, with varying degrees of success, which provided important lessons for crime prevention programs in Australia;
- there is very little precedent for systematic approaches to program-wide performance measurement in local crime prevention in an Australian context;
- there is a need to establish systematic and consistent data collection mechanisms relating to clearly defined performance criteria to improve the availability of reliable data to regularly monitor the implementation and effectiveness of local crime prevention, and to support future evaluations;
- the value and importance of performance measurement must be demonstrated and communicated to key stakeholders to ensure it is supported; and
- there must be strategies to address resource constraints and training and development to ensure adequate knowledge and skills exist to support the framework (Morgan & Homel 2011).
The performance measurement framework developed for use in Western Australia was designed to be integrated into a wider evaluation strategy for local community crime prevention and to reflect the typical information and reporting requirements of both central and local governments. It was also intended to provide a key program management resource for both levels of government to use on a regular basis to ensure that projects and plans remained on track and consistent with wider crime prevention goals and objectives. Further work is underway to determine whether embedding this type of approach within crime prevention programs can enhance program delivery and demonstrate program benefits as intended.
Establish mechanisms to effectively manage and support evaluation work
Sound governance arrangements are as crucial to evaluation as they are to the successful implementation of crime prevention programs and policies (Homel & Homel 2012). This requires a clear strategy that reflects the central agency’s commitment to evaluation, as well as evaluation priorities (consistent with the targeted approach described above) and their approach to conducting, managing and supporting evaluation work.
Past experience suggests that evaluation is best undertaken as a separate but parallel activity related to the delivery of the program (Homel 2006). For this reason and consistent with other areas of government (eg criminal justice, health and child protection), many state and territory crime prevention agencies have recognised the value of establishing a separate evaluation unit that sits alongside policy and program management teams. Dedicated evaluation units can be staffed with skilled personnel with expertise in evaluation, while the links to policy and program management ensure that evaluation is aligned with agency priorities and that findings from evaluation can be easily shared and used to inform decision making.
In terms of supporting evaluation work, central agencies should give consideration to the existing capacity and potential needs of those likely to be entrusted with the responsibility for evaluation and then make an assessment as to what support is required. This support might include:
- encouraging community-based organisations to undertake or sponsor evaluation work, such as making it a requirement of their funding agreement;
- appointing qualified personnel to undertake high-quality evaluation studies on behalf of community-based organisations as an integral feature of program design;
- reviewing evaluation proposals from grant recipients and local partners, and providing input into evaluation design and methodologies developed by community-based organisations;
- providing guidance and technical support to community-based organisations entrusted with evaluation, both in developing the methodology and throughout the project implementation and evaluation cycle; and
- providing training and resources that help to build the capacity of those involved in evaluation and performance measurement (Homel 2009b; Lipsey et al. 2006; Morgan et al. 2012).
This can include collaborative arrangements that encourage the involvement of researchers in undertaking and supporting crime prevention evaluation, and that support the integration of research into practice. For example, the Stronger Families Learning Exchange was established as a key part of the Australian Government’s Stronger Families and Communities Strategy 2000–2004. It involved a team of researchers providing evaluation advice and guidance to funding recipients, encouraging and supporting project managers to regularly review progress and make changes to improve performance. The evaluation of the overall strategy concluded that the Stronger Families Learning Exchange, and the support it provided, was valued by funding recipients and had been successful in developing a strong evaluation ethos (RMIT University CIRCLE 2008).
Equally important is the need to develop appropriate mechanisms to ensure that regular feedback on findings from evaluation can be passed back to program managers to support continued program improvement (Homel et al. 2004). This includes the findings from both performance measurement processes and evaluations (interim and final results).
Recommendations for evaluators
There is also scope to improve the methods through which crime prevention is evaluated. Specific recommendations for evaluators working in crime prevention are described below.
Evaluate process and outcomes
Where possible, the evaluation of crime prevention strategies should incorporate both process and outcome evaluation (Weatherburn 2009). In some cases, such as initiatives that begin as a small-scale pilot or are in the initial stages of implementation, it may be beneficial to conduct a process evaluation (providing valuable information to improve program delivery) followed by an outcome evaluation. In others, a process and outcome evaluation can be undertaken simultaneously (and can overlap both in terms of evaluation questions and methods).
A process evaluation can determine whether an intervention was implemented with fidelity. Implementation fidelity refers to the extent to which an intervention was implemented in accordance with its original design, whether the required dosage of the intervention was delivered, the overall quality of intervention delivery and the extent to which participants were engaged and involved in the program (Mihalic et al. 2004). Assessing implementation fidelity is important because it can help to explain why certain outcomes are or are not observed. It can also identify valuable lessons for implementing similar interventions in the future, helping to avoid implementation failure—something that has proven to be a significant issue impacting upon the effectiveness of crime prevention programs and initiatives, both in Australia and overseas (Homel 2009a; Sutton, Cherney & White 2008; Tilley 2009).
Plan evaluation early and adopt a systematic approach
Irrespective of whether a process and/or outcome evaluation is being undertaken, it is important for the evaluation design and research methods to be determined early in the life of the project (AIC 2012; Weatherburn 2009). For some initiatives, particularly where there is a lack of existing evidence, the need to evaluate may influence decisions about the design or implementation of that initiative—such as delivering the project in one area so that observed outcomes can be compared with areas not subject to the intervention. Unfortunately, it is common for evaluation to be an afterthought, which poses numerous challenges for the measurement of key outcomes, such as the lack of appropriate baseline measures.
Experience also shows that evaluators should adopt a systematic approach to evaluation. One approach that has been used extensively by the AIC (and many other evaluators) has been to develop an evaluation framework that guides the evaluation and keeps it focused. The basic steps involved in developing an evaluation framework and conducting the evaluation are described in Figure 1.
Importantly, the first step in this process involves documenting important information about the project being evaluated. As well as project objectives, information should be recorded about the problem being addressed, the theory or logic that underpins the project, a description of the activities being delivered, information about the people or places being targeted by the project and the stage of implementation that the project is at (Tomison 2000). This information will help guide important decisions about the evaluation approach and methods.
Figure 1: Steps in conducting a systematic evaluation
1. Identify the main objectives of the crime prevention project you are evaluating
2. Prepare a logic model that describes, in a logical order, the project inputs, activities, outputs, short and long-term outcomes
3. Determine the purpose of the evaluation, identify key stakeholders and determine the scope and constraints for the evaluation
4. Identify the questions that need to be answered as part of the evaluation
5. Decide upon an appropriate research design and identify possible data collection methods (including new and existing data)
6. Develop an evaluation framework that links the various components of the logic model (and targeted evaluation questions) with qualitative and quantitative performance indicators that can be measured using the proposed data collection methods
7. Conduct the evaluation, collecting and analysing data in accordance with the evaluation framework and clearly reporting the results
8. Disseminate evaluation findings and use these findings to inform decision making
Aim to determine how and in what circumstances interventions work
A number of highly experienced evaluators have argued that there are limitations to relying upon findings from evaluations that adopt a strictly experimental approach to assessing the effectiveness of crime prevention interventions (Eck 2005; Pawson 2006; Pawson & Tilley 1997). In particular, this approach has been criticised for an emphasis on internal validity (ie attributing causation to the intervention in question) at the expense of external validity (ie generalisability of the study to others) and for not giving adequate consideration to the mechanisms that underpin crime prevention interventions or the context in which these mechanisms are applied (Eck 2005; Pawson & Tilley 1997; Tilley 2009).
In order to assess whether a strategy can be adapted to other circumstances, evaluations should combine methods that seek to identify what works in crime prevention through rigorous scientific methods with those that place an emphasis on developing a more detailed understanding of good practice and of what can be done, and in what circumstances, to prevent crime (Liddle 2010; Morgan et al. 2012; Tilley 2009). This means being explicit about the theory that underpins a particular approach and then using evaluation to test that theory. There are different ways of approaching this task, including realist evaluation (specifying and evaluating one or more context, mechanism and outcome configurations) or program logic (developing a model that describes, in a logical order, the project inputs, activities, outputs, and short and long-term outcomes). Neither approach is incompatible with rigorous experimental research designs, which remain the sought-after standard for outcome evaluations (Eck 2005; Weatherburn 2009).
Make better use of weak evaluations
This paper has already demonstrated that many crime prevention evaluations do not meet the criteria for drawing strong conclusions about the effectiveness of the intervention being evaluated. Many of the barriers to the rigorous evaluation of project outcomes may be overcome by adopting the recommendations also described in this paper.
However, several practical challenges will continue to hinder efforts to ensure crime prevention evaluations (particularly evaluations of small-scale initiatives) meet the standard for drawing conclusions about the effectiveness of an intervention. Some authors have suggested that rigorous adherence to experimental research design is not necessary or optimal in the evaluation of situational crime prevention initiatives, particularly given the practical challenges associated with finding similar geographic areas suitable for comparison (Eck 2002; Knutsson & Tilley 2009).
Similarly, the nature of some interventions enables comparison groups to be more readily identified. For example, in evaluating interventions that draw participants from an established institution, such as the criminal justice system, schools or hospitals, administrative data relating to key outcomes for comparison groups may be more readily available to evaluators because there are processes in place to collect that information. Conversely, interventions that draw participants from community settings (such as youth drop-in centres or recreational facilities and services), or that occur outside of established institutions may be less likely to have systems in place to collect administrative data and the organisations involved may have limited access to data that might be available from other sources. As such, they may be less amenable to evaluations that employ experimental research designs.
However, when viewed collectively, these ‘weak’ evaluations still have the potential to provide valuable information for decision makers (Eck 2002). Eck argues that for internal evaluations and small-scale projects with relatively modest objectives, there is greater value in focusing on a more thorough analysis of the problem, being more explicit about the theory of how an intervention should work and then testing that theory by conducting simple pre-post and short-term time series studies to measure whether the intervention is having the desired effect. That way, projects can be refined until the desired outcomes are observed. Similarly, Morgan et al. (2012) argued that the accumulation of these weaker studies, despite their obvious drawbacks, can still provide a valuable evidence base about the implementation and possible impact of otherwise untested initiatives.
Measure short and long-term outcomes
The long-term nature of many crime prevention programs, particularly those that attempt to address the underlying social determinants of crime such as education, employment and access to housing, means that outcomes may not emerge until some time after the intervention has been delivered. Other projects are targeted at specific problems in certain locations or among specific populations, and are too small in scale to have an impact on aggregate community-level indicators (such as crime rates). Program managers and evaluators therefore need to be realistic about what outcomes can be delivered in the timeframe available for the evaluation and design their performance indicators and research methods accordingly. This may not always include a direct impact on crime rates or other community-level indicators (Tomison 2000). Instead, evaluation resources may be better invested in measuring the short-term or immediate impact of a program on participants and/or the community being targeted.
Large-scale programs that involve a significant financial investment warrant longer term evaluation, designed early in the program development phase to ensure the data is available to measure change over time. Regular reporting as part of a performance measurement system can enable short-term progress to be monitored, while investment in rigorous research designs and methods, such as the collection of longitudinal data for an intervention and matched comparison group, can help determine the long-term impact on individuals and communities (Schweinhart 2004; Weatherburn 2009). It is also possible to adopt innovative approaches to evaluation, including the application of appropriate modelling and forecasting methodologies, where long-term follow-up is not possible (eg Homel et al. 2006).
Include an economic assessment where possible
Lastly, some evaluations may warrant the inclusion of an economic assessment of a program’s impact. Techniques such as cost-benefit and cost-effectiveness analysis can help demonstrate the financial value of a crime prevention strategy, which can be a persuasive tool when it comes to arguing for the allocation of resources to a particular strategy (Dossetor 2011). To date, there have been few examples of this type of assessment conducted in Australia, but there is evidence from overseas demonstrating that the use of economic evaluations to inform public policy can deliver both crime reduction benefits and financial savings to the criminal justice system (Aos et al. 2011; Dossetor 2011). Prominent examples from Australia include the evaluation of the Pathways to Prevention program in Queensland (Homel et al. 2006) and the evaluation of Operation Burglary Countdown in Western Australia (Cummings 2006). Both evaluations demonstrated that there were substantial financial benefits associated with crime prevention and as a result, were influential in convincing decision makers to invest further funding in expanded programs.
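To make the underlying arithmetic concrete, a minimal sketch of a benefit-cost ratio (a standard formulation, not a method specific to the evaluations cited above) compares the discounted stream of monetised benefits, such as crime costs avoided, with the discounted stream of program costs:

$$\text{BCR} = \frac{\sum_{t=0}^{T} B_t/(1+r)^t}{\sum_{t=0}^{T} C_t/(1+r)^t}$$

where $B_t$ and $C_t$ are the benefits and costs accruing in year $t$ of an evaluation horizon of $T$ years and $r$ is the discount rate; a ratio greater than one indicates that the program returns more than it costs. Cost-effectiveness analysis instead reports the cost per unit of outcome achieved (eg dollars per offence prevented), which is useful where benefits are difficult to monetise.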
Conclusion
The purpose of this paper has been to highlight a number of important recommendations for evaluating crime prevention, drawing on the experiences of several Australian crime prevention programs that have been reviewed by the AIC. The challenges that have been described are not unique to crime prevention, as evaluation is often overlooked in favour of a focus on delivering important programs and services. As such, the recommendations presented in this paper have broader application, insofar as they apply equally to other sectors that use similar modes of delivery to the crime prevention sector (eg community grants programs, central-regional partnerships). Adopting these recommendations will increase the level and quality of evaluation, which will improve the evidence base available to decision makers and, in turn, lead to better informed and more effective crime prevention strategies.
References
All URLs correct May 2013
- Anderson J & Homel P 2005. Reviewing the New South Wales local crime prevention planning process. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/archive/archive-67
- Anderson J & Tresidder J 2008. A review of the Western Australian community safety and crime prevention planning process: Final report. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/archive/archive-98
- Aos S et al. 2011. Return on investment: Evidence-based options to improve statewide outcomes. Olympia: Washington State Institute for Public Policy. http://www.wsipp.wa.gov/rptfiles/11-07-1201.pdf
- Attorney-General’s Department (AGD) 2011. Overview of the Proceeds of Crime Act 2002. http://www.crimeprevention.gov.au/POCAfundingfornongovernmentsgencies/Pages/default.aspx
- Australian Institute of Criminology (AIC) 2012. National crime prevention framework. Canberra: AIC. https://www.aic.gov.au/publications/special/special
- Bodson J et al. 2008. International report on crime prevention and community safety: Trends and perspectives, 2008. Montreal: International Centre for the Prevention of Crime. http://www.crime-prevention-intl.org/en/publications/report/report/article/international-report-on-crime-prevention-and-community-safety.html
- Cameron M & Laycock G 2002. Crime prevention in Australia, in Graycar A & Grabosky P (eds), The Cambridge handbook of Australian criminology. Melbourne: Cambridge University Press: 313–331
- Cherney A & Sutton A 2007. Crime prevention in Australia: beyond ‘what works’? The Australian and New Zealand Journal of Criminology 40(1): 65–81
- Clancey G 2010. Considerations for establishing a public CCTV network. Research in Practice no. 8. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/rip/rip8
- Crime Prevention Queensland (CPQ) 1999. Queensland crime prevention strategy: Building safer communities. Brisbane: Department of Premier and Cabinet
- Cummings R 2006. ‘What if’: The counterfactual in program evaluation. Evaluation Journal of Australasia 6(2): 6–15
- Dossetor K 2011. Cost-benefit analysis and its application to crime prevention and criminal justice research. Technical and background paper no. 42. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/tbp/tbp42
- Eck J 2005. Evaluation for lesson learning, in Tilley N (ed), Handbook of crime prevention and community safety. Cullompton: Willan Publishing: 699–733
- Eck J 2002. Learning from experience in problem-oriented policing and situational prevention: The positive functions of weak evaluations and the negative functions of strong ones, in Tilley N (ed), Evaluation for crime prevention. Monsey, NY: Criminal Justice Press
- United Nations Economic and Social Council (ECOSOC) 2002. Guidelines for the prevention of urban crime (resolution 1995/9, annex) and Guidelines for the prevention of crime (resolution 2002/13, annex). http://www.un.org/documents/ecosoc/res/1995/eres1995-9.htm
- English B, Cummings R & Stratton R 2002. Choosing an evaluation model for community crime prevention programs, in Tilley N (ed), Evaluation for crime prevention. Monsey, NY: Criminal Justice Press: 119–169. http://www.popcenter.org/library/crimeprevention/volume_14/05-English.pdf
- Farrington DP & Welsh BC 2006. Family-based crime prevention, in Sherman LW, Farrington DP, Welsh BC & MacKenzie DL (eds), Evidence-based crime prevention. London: Routledge: 22–55
- Farrington DP, Gottfredson DC, Sherman LW & Welsh BC 2006. The Maryland scientific methods scale, in Sherman LW, Farrington DP, Welsh BC & MacKenzie DL (eds), Evidence-based crime prevention. London: Routledge: 13–21
- Gant F & Grabosky P 2000. The promise of crime prevention, 2nd ed. Research and public policy series no. 31. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/rpp/rpp31
- Grabosky P & James M 1995. The promise of crime prevention. Research and public policy series no. 1. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/rpp/rpp1
- Henderson M & Henderson P 2002. Good practice features of community crime prevention models: Report to Crime Prevention Queensland. Brisbane: Department of Premier and Cabinet
- Home Office 2007. Delivering safer communities: A guide to effective partnership working. London: Home Office
- Homel P 2009a. Lessons for Canadian crime prevention from recent international experience. IPC Review 3: 13–39
- Homel P 2009b. Improving crime prevention knowledge and practice. Trends & Issues in Crime and Criminal Justice no. 385. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/tandi/tandi385
- Homel P 2006. Joining up the pieces: What central agencies need to do to support effective local crime prevention, in Knutsson J & Clarke R (eds), Putting theory to work: Implementing situational prevention and problem-oriented policing. Crime Prevention Studies no. 20. New Jersey: Prentice Hall: 111–139
- Homel P 2005. A short history of crime prevention in Australia. Canadian Journal of Criminology and Criminal Justice 47(2): 355–368
- Homel P, Morgan A, Behm A & Makkai T 2007. The review of the National Community Crime Prevention Programme: Establishing a new strategic direction. Report to the Australian Attorney-General’s Department
- Homel P, Nutley S, Tilley N & Webb B 2004. Investing to deliver: Reviewing the implementation of the UK Crime Reduction Programme. Home Office Research Study no. 281. London: Home Office
- Homel R 2005. Developmental crime prevention, in Tilley N (ed), Handbook of crime prevention and community safety. Cullompton: Willan Publishing: 71–106
- Homel R et al. 2006. The pathways to prevention project: The first five years 1999–2004. Sydney: Mission Australia and the Key Centre for Ethics, Law, Justice & Governance, Griffith University
- Homel R et al. 1999. Pathways to prevention: Developmental and early intervention approaches to crime in Australia. Canberra: Attorney-General’s Department. http://www.crimeprevention.gov.au/Publications/EarlyIntervention/Pages/Pathways_to_Prevention_Full_Report.aspx
- Homel R & Homel P 2012. Implementing crime prevention: Good governance and a science of implementation, in Welsh BC & Farrington DP (eds), The Oxford handbook of crime prevention. Oxford: Oxford University Press: 423–445
- Hope T 2005. Pretend it doesn’t work. European Journal on Criminal Policy and Research 11(3–4): 275–296
- Hope T 1995. Community crime prevention, in Tonry M & Farrington DP (eds), Building a safer society: Strategic approaches to crime prevention. Chicago: the University of Chicago Press: 21–90
- Idriss M, Jendly M, Karn J & Mulone M 2010. International report on crime prevention and community safety: Trends and perspectives, 2010. Montreal: International Centre for the Prevention of Crime
- Institute for the Prevention of Crime (IPC) 2008. What is crime prevention? Ottawa: University of Ottawa
- Knutsson J & Tilley N 2009. Introduction, in Knutsson J & Tilley N (eds), Evaluating crime reduction initiatives. New Jersey: Prentice Hall: 1–6
- Liddle M 2010. Reviewing the effectiveness of community safety policy and practice: An overview of current debates and their background, in Idriss M, Jendly M, Karn J & Mulone M (eds), International report on crime prevention and community safety: Trends and perspectives, 2010. Montreal: International Centre for the Prevention of Crime
- Lipsey MW, Petrie C, Weisburd D & Gottfredson D 2006. Improving evaluation of anticrime programs. Journal of Experimental Criminology 2: 271–307
- Manning M, Homel R & Smith C 2010. The effects of early developmental prevention programs in at-risk populations on non-health outcomes in adolescence. Children and Youth Services Review 32: 506–519
- Mihalic S, Irwin K, Fagan A, Ballard D & Elliott D 2004. Successful program implementation: Lessons from blueprints. Washington, DC: US Department of Justice. https://www.ncjrs.gov/pdffiles1/ojjdp/204273.pdf
- Morgan A 2011. Police and crime prevention: Partnering with the community, in Putt J (ed), Community policing in Australia. Research and Public Policy series no. 111. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/rpp/rpp111
- Morgan A, Boxall H, Lindeman K & Anderson J 2012. Effective crime prevention strategies for implementation by local government. Research and Public Policy series no. 120. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/rpp/rpp120
- Morgan A & Homel P 2011. Model performance framework for local crime prevention. Technical and Background Paper no. 40. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/tbp/tbp40
- NSW Department of Premier and Cabinet (NSW DPC) 2008. NSW crime prevention framework: Strengthening, focusing and coordinating crime prevention in NSW. Sydney: Department of Premier and Cabinet
- NZ Ministry of Justice 2003. Review of the safer community council network: Future directions. NZ: Ministry of Justice
- Office of Crime Prevention (OCP) 2004. Preventing crime: State community safety and crime prevention strategy. Perth: Office of Crime Prevention
- Pawson R 2006. Evidence-based policy: A realist perspective. London: Sage Publications Ltd
- Pawson R & Tilley N 1997. Realistic evaluation. London: Sage
- Pugh J & Saggers S 2007. Snapshot of Western Australian local government community development programs and indicators, April 2007. Perth: Edith Cowan University Centre for Social Research
- RMIT University, Collaborative Institute for Research, Consulting and Learning in Evaluation (CIRCLE) 2008. Evaluation of the stronger families and communities strategy 2000–2004: Final report. Melbourne: RMIT University
- Schweinhart LJ 2004. The High/Scope Perry Preschool study through age 40: Summary, conclusions, and frequently asked questions. Ypsilanti, MI: High/Scope Educational Research Foundation
- Sherman LW, Farrington DP, Welsh BC & MacKenzie DL 2006. Evidence-based crime prevention, 2nd ed. London: Routledge
- Sutton A, Cherney A & White R 2008. Crime prevention: Principles, perspectives and practices. Melbourne: Cambridge University Press
- Tilley N 2009. Crime prevention. Cullompton, Devon: Willan Publishing
- Tomison A 2000. Evaluating child abuse prevention programs. Issues in Child Abuse Prevention no. 12. Melbourne: Australian Institute of Family Studies. http://www.aifs.gov.au/nch/pubs/issues/issues12/issues12.html
- United Nations Evaluation Group (UNEG) 2005. Norms for evaluation in the UN System. Vienna: UNEG. http://www.uneval.org/papersandpubs/documentdetail.jsp?doc_id=21
- United Nations Office on Drugs and Crime (UNODC) 2010. Handbook on the crime prevention guidelines: Making them work. Vienna: UNODC. http://www.unodc.org/documents/justice-and-prison-reform/crimeprevention/10-52410_Guidelines_eBook.pdf
- Weatherburn D 2004. Law and order in Australia: Rhetoric and reality. Leichhardt, NSW: The Federation Press
- Weatherburn D 2009. Policy and program evaluation: Recommendations for criminal justice policy analysts and advisors. Crime and Justice Bulletin no. 133. Sydney: NSW Bureau of Crime Statistics and Research. http://www.lawlink.nsw.gov.au/lawlink/bocsar/ll_bocsar.nsf/vwFiles/CJB133.pdf/$file/CJB133.pdf
- Welsh B & Hoshi A 2006. Communities and crime prevention, in Sherman LW, Farrington DP, Welsh BC & MacKenzie DL (eds), Evidence-based crime prevention. London: Routledge: 165–197
- Willis K & Fuller G 2012. Evaluation of the Proceeds of Crime Act 2002 funding program. Report to the Australian Attorney-General’s Department
- Wilson D & Sutton A 2003. Open-street CCTV in Australia. Trends & Issues in Crime and Criminal Justice no. 271. Canberra: Australian Institute of Criminology. https://www.aic.gov.au/publications/tandi/tandi271
About the authors
Anthony Morgan is a Principal Research Analyst at the AIC.
Peter Homel is Principal Criminologist (Crime Prevention) at the AIC and a Professor at Griffith University’s Key Centre for Ethics, Law, Justice and Governance.