GLOSSARY

Acceptability

The extent to which the stakeholders find the strategy satisfactory or agreeable (http://www.ojp.usdoj.gov/BJA/evaluation/glossary/).

Archival data analysis

Archival data is information that has already been collected and/or documented. It can include records that are kept by governmental and other agencies, as well as records normally kept as part of the operation of an institution or organization (http://www.sdrg.org/ctcresource/Community%20Assessment%20Training/Trainer%20Guide/CAT_TG_mod3.pdf).

Best Available Research Evidence

Best available research evidence enables researchers, practitioners, and policy-makers to determine whether or not a prevention program, practice, or policy is actually achieving its intended outcomes in the way it intends. The more rigorous a study's research design, the more compelling the research evidence indicating whether or not a program, practice, or policy is effectively preventing violence (Puddy & Wilkins, 2011).

Capacity assessment

Process to identify those particular areas of capacity that are strongest and those that need improvement (http://www.vppartners.org/sites/default/files/reports/assessment.pdf).

Communities of Practice

This concept encompasses the many ways that people with knowledge of and experience with a specific issue gather to share their insights toward a common goal. A community of practice can range from something as informal as a listserv to a highly structured working group.

Consensus

The production of a common understanding among participants about issues and programs (http://www.ojp.usdoj.gov/BJA/evaluation/glossary/).

Contextual Evidence

Contextual Evidence refers to information about whether or not a strategy “fits” with the context in which it is to be implemented. In other words, contextual evidence provides prevention practitioners with information on whether a strategy is feasible to implement, is useful, and is likely to be accepted by a particular community (CHSRF, 2005; SAMHSA, 2007; Victoria et al., 2004).

Credible

The credibility of information depends on its source: how worthy it is of belief is judged against external criteria (who provided it and where it comes from) and internal criteria (independent knowledge of the subject).

Economic analysis

A systematic approach to determining the best use of scarce resources. The process typically involves measuring in monetary terms the private and social costs and benefits of a particular strategy to the community or economy (http://www.businessdictionary.com/definition/economic-analysis.html).

Experiential Evidence

Experiential Evidence is the collective experience and expertise of those who have practiced or lived in a particular setting. It also includes the knowledge of subject matter experts. This insight, understanding, skill, and expertise are accumulated over time and are often referred to as intuitive or tacit knowledge (Orleans et al., 1999).

Expert Panels

Multiple subject matter experts brought together to share experiences on what has “worked” for them in putting their knowledge into practice.

Feasibility

The applicability or practicability of a proposed action or plan (http://www.ojp.usdoj.gov/BJA/evaluation/glossary/).

Fidelity

Fidelity is the degree to which a program, practice, or policy is conducted in the way that it was intended to be conducted. This is particularly important during replication, where fidelity is the extent to which a program, practice, or policy being conducted in a new setting mirrors the way it was conducted in its original setting.

Influence mapping

Influence mapping is a process that involves identifying the individuals and groups with the power to effect a key decision, including the position and motives of each person and the best channels through which to communicate with them. The approach is also known as Stakeholder Influence Mapping, Power Mapping, or the Arena of Influence (http://www.odi.org.uk/resources/details.asp?id=5697&title=influence-mapping-stakeholder-analysis).

Needs assessment

Conducting a needs assessment involves a systematic process for determining and addressing needs, or "gaps" between current conditions and desired conditions or "wants" (Kizlik, B., "Needs Assessment Information", ADPRIMA, http://www.adprima.com/needs.htm, accessed 16 October 2010).

Observable

Able to be perceived by those not directly involved in the process (http://www.ojp.usdoj.gov/BJA/evaluation/glossary/).

Paraprofessional

An individual who is not a member of a profession but works under the supervision of a teacher or another professional staff member who has the ultimate responsibility for the design, implementation, and evaluation of education programs and related services (National Resource Center for Paraprofessionals).

Policy network mapping (PNM)

PNM is a process to identify policy networks and map network structures, using qualitative and quantitative methods to reveal linkages between actors (http://jtp.sagepub.com/content/10/4/389.short).

Political climate

Political climate refers to the overall opinion of a population on political issues. For example, on sensitive issues (e.g., sex education in schools), a community or population served may tend to share similar perspectives or opinions.

Quasi-Experimental Designs

Experimental designs that are based on sound theory and typically have comparison groups (but no random assignment of participants to conditions) and/or multiple measurement points (e.g., pre-post measures, longitudinal designs).

Randomized Control Trial

A trial in which participants are assigned at random to a control group or an experimental group (which receives the strategy), meaning that every member of the sample has an equal chance of being selected for either group (e.g., flipping a coin, where "heads" assigns a participant to the control group and "tails" to the experimental group). Random assignment allows the two groups to be assumed equivalent, with no systematic differences between them, which increases the likelihood that any differences in outcomes are due to the program, practice, or policy and not some other variable(s) on which the groups differ.
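The coin-flip assignment described above can be sketched in a few lines of Python. This is a hypothetical illustration (the function name and participant IDs are made up, not drawn from the glossary's sources):

```python
import random

def randomize(participants, seed=None):
    """Assign each participant to the control or experimental group
    at random, giving every member of the sample an equal chance of
    landing in either group (the coin flip described above)."""
    rng = random.Random(seed)
    # "heads" -> control, "tails" -> experimental
    return {p: "control" if rng.random() < 0.5 else "experimental"
            for p in participants}

groups = randomize(["p01", "p02", "p03", "p04"], seed=1)
```

In practice, researchers often use constrained variants (e.g., block randomization) to keep group sizes balanced, but the equal-chance principle is the same.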

Reflective Questions

Questions that elicit memories, experiences, feelings, and reactions. (ICA, 2000)

Replicable

The methods used to gather the information would yield the same results if used again.

Rigorous

Extremely thorough adherence to strict rules or discipline to ensure results that are as accurate as possible.

Secondary Sources

Pre-existing sources of data that were not collected by the user. Examples include data collected for similar studies (used for comparison) and large existing datasets.

Stakeholder analysis

Stakeholder analysis is a process of systematically gathering and analyzing qualitative information to determine whose interests should be taken into account when developing and/or implementing a policy or program (http://www.eestum.eu/voorbeelden/Stakeholders_analysis_guidelines.pdf).

Systematic Review

The assembly, critical appraisal, and synthesis of all relevant studies of a specific program, practice, or policy in order to assess its overall effectiveness, feasibility, and “best practices” in its implementation.

Systematically obtained

Data collected by methods that are thorough, methodical, and analytical rather than in an arbitrary, unplanned, or haphazard way.

Team Decision-Making Process

Activities whose purpose is to draw on the collective knowledge of the group in order to make a decision.

Utility

The state or quality of being useful to stakeholders.

Verifiable

The extent to which something can be tested for accuracy.