Original Article

The Self-Assessment for Organizational Capacity Instrument for Evidence-Informed Health Policy: Preliminary Reliability and Validity of an Instrument

Cristina Catallo, RN, PhD • Souraya Sidani, RN, PhD

Keywords

health policy, evidence-informed policy, policymakers, organizational capacity, instrument validation, reliability, validity

ABSTRACT

Background: Health policymakers work in organizations that involve multiple competing demands and limited time to make decisions. Influential international policy organizations continue to publish guidance and recommendations without the use of high-quality research evidence. Few studies have described the process by which governments, including health ministries, use evidence to support health policymaking decisions. Research is needed to better understand the psychometric properties of instruments that assess health policy organizations' capacity to use research evidence.

Aim: The purpose of this study was to conduct a preliminary assessment of the psychometric properties of an instrument that assesses organizational capacity for evidence use.

Methods: The instrument was administered by telephone survey from January to June 2011 to a purposeful sample of 57 Canadian health policymakers (policy analyst and senior management levels). Reliability was assessed with the Cronbach's α coefficient and item-to-total correlations for internal consistency; interitem correlation coefficients were used to identify potential item redundancy. Discriminant validity was assessed using the known-group comparison approach, with the independent sample t-test comparing subscale responses of policy analysts and senior managers.

Findings: Cronbach's α coefficients indicated acceptable internal consistency across the instrument's subscales. Discriminant validity analysis revealed a statistically significant difference between policy analysts and senior managers for one subscale.

Linking Evidence to Action: Our study provides a first look at the Self-assessment for Organizational Capacity Instrument's psychometric properties and demonstrates that this instrument can be useful when evaluating government and other organizations' use of evidence to inform decision making. Further testing of this instrument is needed using large and varied samples of policymakers, from policy analysts to senior managers, across varied policymaking contexts. This instrument can be a starting point for government and related organizations to better understand how well they support evidence use, including its acquisition, appraisal, and application in health policy decision making.

BACKGROUND

The organizational environment of the health policymaker involves multiple competing demands and limited time to make decisions that have the potential to have a tremendous impact on health care systems and practice. Because of these demands, policymakers need timely access to reliable, high-quality, synthesized research evidence (Lavis, 2006, 2009; Lavis, Lomas, Hamid, & Sewankambo, 2006). Policymakers and other decision makers may recognize the need for research evidence to inform health policy, but may lack the time, skills, or resources to find and apply the evidence throughout the decision-making process (Jewell & Bero, 2008; Lavis, 2006; Lavis et al., 2006). International policy organizations, such as the World Health Organization, have recommended that policymaking organizations build systems to support greater research use in the development and reform of health policy (Lavis, Oxman, Moynihan, & Paulsen, 2008; World Health Organization, 2004). Health policymakers, as well as other health care system decision makers, including nursing administrators, are expected to utilize research evidence to inform daily decisions. Yet in the case of two prominent international health policy organizations, the World Health Organization and the World Bank, authors have identified that high-quality research evidence is not consistently used in their published recommendations and guideline statements (Hoffman, Lavis, & Bennett, 2009; Oxman, Lavis, & Fretheim, 2007). Making policy decisions in the absence of evidence can have deleterious effects, such as the implementation of programs and services that are not effective in achieving desired results or not distributed in a way that maximizes resources (Oxman, Vandvik, Lavis, Fretheim, & Lewin, 2009). Theoretical frameworks on research evidence use have started to consider the impact of organizational features on the adoption of evidence for decision making. Organizational structure, values and beliefs, resources, and leadership style are examples of such organizational features (Beyer & Trice, 1982; Graham & Logan, 2004; Kitson, Harvey, & McCormack, 1998). Despite these theories, little is known about the impact of the organization on evidence-informed health policy.

Within governments and the health care organizations that support government, the capacity and resources needed for an organization to support evidence use are not clearly understood (Kothari, Edwards, Hamel, & Judd, 2009). Some organizations have created tools to support capacity development for decision makers housed within the organization, such as the National Collaborating Centre's Registry of Methods and Tools, which provides critically appraised methods and tools for public health managers and policymakers interested in implementing knowledge translation interventions (National Collaborating Centre for Methods and Tools, 2009; Peirson, Catallo, & Chera, 2013). Other organizations have focused on capacity development of an organization as a whole to utilize evidence, such as the Canadian Foundation for Healthcare Improvement's (formerly the Canadian Health Services Research Foundation's) self-assessment tool "Is research working for you?" (Canadian Foundation for Healthcare Improvement [CFHI], n.d.; Kothari et al., 2009; Thornhill, Judd, & Clements, 2009). While this tool examines how organizations acquire, assess, adapt, and apply evidence, it does not look at how the organizational environment supports the knowledge translation process, such as those activities that "push" evidence out to users or "pull" users to seek research from organizations (Oxman et al., 2009). Likewise, an organization's structures and processes can facilitate or inhibit the internal use of evidence. Specifically, organizational culture can affect how well individuals employed at these organizations can apply evidence to policy-relevant issues (Kothari et al., 2009; Oxman et al., 2009). While some tools exist to help individuals evaluate their ability to assess and utilize research, few have looked at the processes and routines involving research evidence use within an organization (Kothari et al., 2009). One instrument that attempts to address these gaps is the Self-assessment of Organizational Capacity to Support the Use of Research Evidence to Inform Decisions Instrument (Oxman et al., 2009). This instrument was designed to aid organizations in assessing their capacity to use research evidence, and to help them improve in areas where capacity is lacking. The content of the instrument is based on the CFHI's (n.d.) tool "Is research working for you?" and was expanded to reflect knowledge translation activities that link research to action, such as push, user pull, exchange, and integrated strategies (Oxman et al., 2009). The instrument is part of the SUPPORT Tools for evidence-informed health policymaking series by the Supporting Policy relevant Reviews and Trials (SUPPORT) project, an international collaboration funded by the European Commission's 6th Framework (http://www.support-collaboration.org; Lavis, Oxman, Lewin, & Fretheim, 2009). The Self-assessment of Organizational Capacity to Support the Use of Research Evidence to Inform Decisions Instrument includes seven domains related to supporting the use of evidence to inform decisions: organizational culture and values; setting priorities for obtaining research evidence; obtaining research evidence; assessing the quality and applicability of research evidence and interpreting the results to inform priority decisions; using research evidence to inform recommendations and decisions; monitoring and evaluating policies and programs; and supporting continuing professional development on evidence-based topics (Oxman et al., 2009). Limited research is available on the organizational capacity of ministries for evidence use in health policymaking. The instrument has been assessed for its content and revised accordingly after use in a variety of workshops with different groups; however, its psychometric properties have not been evaluated.

AIMS

The purpose of this study was to provide a first assessment of the psychometric properties of the instrument Self-assessment of Organizational Capacity to Support the Use of Research Evidence to Inform Decisions (hereafter referred to as the Self-assessment for Organizational Capacity Instrument). The specific objectives were to examine the internal consistency of the instrument's subscales reflecting the seven aspects of organizational capacity for research evidence use, and to explore its discriminant validity using the known group comparison approach. Data were obtained from a sample (n = 57) of Canadian health policymakers, including policy analysts and senior managers. It was hypothesized that policy analysts, who are responsible for incorporating evidence into documents, and senior decision makers, who implement leadership mandates and are involved in strategic planning for the organization, would have different perspectives related to the organization's capacity for evidence use. For each of the Self-assessment for Organizational Capacity Instrument's domains, we anticipated that senior managers would be more familiar than policy analysts with the organization's mission supporting evidence use, its processes for setting priorities for obtaining research evidence, its ability to use research evidence to inform decisions, and its monitoring and evaluation of policies and programs. Although very little is written about the functions of senior managers in government versus a more junior role such as the policy analyst, Oxman et al. (2007) indirectly describe how roles within a government organization differ, with senior managers more involved in activities related to commissioning expert committees to review a health care issue and assimilate evidence to support recommendations. We hypothesized that policy analysts and senior managers would have similar perceptions of how well the organization obtains research evidence, assesses research evidence quality and applicability, and supports continuing education on evidence-based topics. Much has been written about policymakers' focus on the technical aspects of policymaking, often without the use of evidence. However, previous work has not distinguished between the perceptions of senior management and those of policy analysts. Jewell and Bero (2008), in their study of senior-level policymakers and legislative officials, identified a limited ability to collect and evaluate research due to limited resources, a lack of skills to find and appraise research evidence, and an overall incongruence between organizational mandates and the requirements of evidence-based practice.

METHODS

Design
A cross-sectional design was applied. The instrument was administered by a trained interviewer via telephone. Telephone interviews were conducted from January to June 2011 with a purposeful sample of Canadian health policymakers representing policy analysts and senior managers.

Sampling and Procedures
To identify potentially eligible participants, we adapted the procedures used by Wathen et al. (2009). A matrix was created to generate lists of potential policy analysts and senior decision makers in different health ministries in each province. The information was obtained from publicly available online organizational charts and staff directories. Research assistants initially contacted potential participants by e-mail to determine eligibility. Participants were deemed eligible if they: (a) worked in a health ministry policy unit or department, or a related unit or department involved with health program or policy development, implementation, or analysis; (b) were employed in a role (e.g., policy analyst, consultant, epidemiologist) responsible for reviewing and using evidence to prepare reports for senior management, cabinet submissions to fund new programs or policies, or summaries and analyses to guide policy decision making; (c) assumed the role of a senior manager or director responsible for a unit or department and the hiring of staff who would use evidence as part of their day-to-day functions; and (d) were able to commit to a 1-hour telephone interview. Individuals who were not eligible were asked to recommend other persons from their department who might meet the study criteria. Eligible participants were e-mailed a study invitation, consent form, and a copy of the Self-assessment for Organizational Capacity Instrument; they could use the copy to follow along during the telephone interview to facilitate administration of the instrument. Research assistants contacted the respondents by e-mail or telephone to schedule an interview time. One interviewer (CC) conducted all interviews. The interviewer used a standardized scripted version of the survey, read each question to the participant, and recorded all responses. For any potentially eligible policymaker who did not respond to the original e-mail invitation, a multimethod approach involving e-mail and telephone contacts was followed, based on Dillman's original Total Design Method (Dillman, 1978).

In total, 57 participants met eligibility criteria, enrolled, and completed the instrument. The obtained sample size was adequate for assessing the internal consistency of each subscale (containing four to six items) of the Self-assessment for Organizational Capacity Instrument, applying the recommendation of having 5–10 participants per item (Streiner & Norman, 2003), and for detecting a medium-sized difference in group means, setting α at .05 and power (1 − β) at .80 (Cohen, 1992).
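The participants-per-item heuristic referenced above can be made concrete with a short calculation. The sketch below is illustrative only and is not part of the original study; the subscale sizes come from the instrument description and n = 57 is the obtained sample.

```python
# Illustrative check of the 5-10 participants-per-item heuristic
# (Streiner & Norman, 2003) against the obtained sample of 57.

N_OBTAINED = 57
SUBSCALE_SIZES = {"smallest subscale": 4, "largest subscale": 6}  # items per subscale

for label, n_items in SUBSCALE_SIZES.items():
    low, high = 5 * n_items, 10 * n_items
    verdict = "meets" if N_OBTAINED >= low else "falls below"
    print(f"{label} ({n_items} items): {low}-{high} participants recommended; "
          f"n = {N_OBTAINED} {verdict} the lower bound")
```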

INSTRUMENT

The Self-assessment for Organizational Capacity Instrument contains seven subscales with items describing the organization's: (a) culture and values supporting use of research evidence to inform decisions, (b) setting of priorities for obtaining research evidence, (c) ability to acquire research evidence to inform decisions, (d) capacity to assess quality and applicability of the research evidence and to interpret the results so that they inform priority decisions, (e) use of research evidence to inform recommendations and decisions, (f) monitoring and evaluation of policies and programs, and (g) continuing professional development on evidence-based topics. The instrument uses a 6-point Likert scale with responses ranging from don't know (0) to strongly agree (5). The instrument contains 36 items, for a total possible score ranging from 0 to 180. The number of items per subscale varies from 4 to 6, and the maximum possible subscale scores range between 20 and 30.
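To make the scoring structure concrete, the sketch below sums hypothetical item responses into subscale and total scores. The subscale names and item counts mirror the description above, but the column identifiers are illustrative assumptions, not the instrument's actual item labels.

```python
import pandas as pd

# Hypothetical layout: 7 subscales, 36 items, each scored 0 ("don't know") to 5 ("strongly agree").
SUBSCALES = {
    "culture_values":           [f"cv_{i}" for i in range(1, 7)],  # 6 items, max 30
    "setting_priorities":       [f"sp_{i}" for i in range(1, 6)],  # 5 items, max 25
    "obtaining_evidence":       [f"oe_{i}" for i in range(1, 6)],  # 5 items, max 25
    "assessing_evidence":       [f"ae_{i}" for i in range(1, 5)],  # 4 items, max 20
    "using_evidence":           [f"ue_{i}" for i in range(1, 6)],  # 5 items, max 25
    "monitoring_evaluation":    [f"me_{i}" for i in range(1, 6)],  # 5 items, max 25
    "professional_development": [f"pd_{i}" for i in range(1, 7)],  # 6 items, max 30
}

def score_instrument(responses: pd.DataFrame) -> pd.DataFrame:
    """Sum 0-5 item responses into subscale scores and a 0-180 total score."""
    scores = pd.DataFrame(index=responses.index)
    for subscale, items in SUBSCALES.items():
        scores[subscale] = responses[items].sum(axis=1)
    scores["total"] = scores[list(SUBSCALES)].sum(axis=1)
    return scores
```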

Analysis
Descriptive statistics were computed to examine the distribution, measures of central tendency, and dispersion for individual items, subscale scores, and total scale scores, as well as the participants' characteristics. Internal consistency reliability of the instrument's subscales was assessed using the Cronbach's α coefficient and item-to-total correlations. Examination of interitem correlation coefficients assisted in the identification of item redundancy. For a newly developed instrument, the subscales are considered reliable if the Cronbach's α coefficient is >.70 and the item-to-total correlations are >.30 (de Vaus, 2002; Streiner & Norman, 2003). The correlations among the items within a subscale should range between .30 and .80; correlations greater than .80 imply item redundancy (Streiner & Norman, 2003).
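As a rough sketch of the reliability analysis described above, the functions below compute Cronbach's α, corrected item-to-total correlations, and redundant interitem correlations for one subscale. This is an illustrative implementation of the standard formulas (the study does not report its analysis software); the .70, .30, and .80 criteria from the paragraph above appear as comments.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one subscale (rows = respondents, columns = items).

    Values > .70 are taken as acceptable for a newly developed instrument.
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items (criterion: > .30)."""
    return pd.Series(
        {col: items[col].corr(items.drop(columns=col).sum(axis=1)) for col in items.columns}
    )

def redundant_pairs(items: pd.DataFrame, cutoff: float = 0.80) -> list:
    """Interitem correlations above .80 suggest redundant items."""
    corr = items.corr()
    cols = list(corr.columns)
    return [
        (a, b, round(corr.loc[a, b], 2))
        for i, a in enumerate(cols)
        for b in cols[i + 1:]
        if corr.loc[a, b] > cutoff
    ]
```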

The known group comparison was applied to explore the discriminant validity of the instrument subscales. The mean subscale scores were compared for policy analysts and senior managers using the independent sample t test. Statistically significant differences provided evidence of validity. In addition, the effect size was computed as the standardized difference in group means to examine the clinical meaningfulness of the difference between groups.
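A minimal sketch of the known-group comparison just described: an independent-samples t test plus a standardized mean difference (Cohen's d) for one subscale. The function and array names are hypothetical; the study does not report its analysis code, and SciPy is assumed here purely for illustration.

```python
import numpy as np
from scipy import stats

def known_group_comparison(analyst_scores: np.ndarray, manager_scores: np.ndarray):
    """Independent-samples t test and Cohen's d for one subscale's scores."""
    t, p = stats.ttest_ind(analyst_scores, manager_scores)  # equal_var=False would give a Welch test
    n1, n2 = len(analyst_scores), len(manager_scores)
    pooled_sd = np.sqrt(
        ((n1 - 1) * analyst_scores.var(ddof=1) + (n2 - 1) * manager_scores.var(ddof=1))
        / (n1 + n2 - 2)
    )
    d = (analyst_scores.mean() - manager_scores.mean()) / pooled_sd  # standardized mean difference
    return t, p, d
```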

Ethics
This study was approved by the research ethics board at Ryerson University, Toronto, Canada. Eligible participants received a study information form and provided written, informed consent.

FINDINGS

Demographics
The sample consisted of 57 policymakers across 9 provinces; 36 participants worked in a policy analyst role and 21 as a senior manager or director. Table 1 outlines the main demographic characteristics of participants. Most respondents were employed from 1 to 5 years in their current position and had obtained a graduate degree. There were no missing data, as participants responded to all items on the scale.

Table 1. Demographic Characteristics of Sample (N = 57)

Characteristics                  Civil Servant (n = 36)   Senior Manager (n = 21)   Total (N = 57)
Role in organization
  Data analysis                  10                        1                         11
  Advice to senior management    4                         2                         6
  Policy analysis                19                        0                         19
  Management of staff            1                         17                        18
  Administration of programs     2                         1                         3
Years in current position
  ≤1 year                        9                         0                         9
  1–5 years                      18                        18                        36
  6–10 years                     6                         3                         9
  >10 years                      3                         0                         3
Education
  Bachelor's degree              10                        4                         14
  Master's degree                21                        15                        36
  Doctorate                      5                         2                         7

Descriptive
The mean (SD) score for each item and subscale is presented in Table 2. For the subscales with six items, the scores obtained ranged from 0 to 30 with a mid-point value of 18. Among the subscales with six items, the highest score was found for "organizational culture and values that supports the use of research evidence to inform decisions" (mean = 22.81; SD = 5.38), suggesting that on average, participants agreed that their organizational culture and values support the use of research evidence to inform decisions. Subscales with five items had scores that ranged from 0 to 25 with a mid-point value of 15. Among these subscales, the highest score was found for the organization's "ability to obtain research evidence to inform decisions" (mean = 18.86; SD = 3.49), suggesting that on average, participants agreed that the organization is able to obtain research evidence for decision making. Finally, for subscales with four items, the scores ranged from 0 to 20 with a mid-point of 12. The only subscale with four items was the organization's ability to "assess the quality and applicability of the research evidence and to interpret the results to inform priority decisions"; its mean score was 12.74 (SD = 3.99), which suggested that participants agreed slightly that their organization can carry out this function.

Internal Consistency Reliability
The item-to-total correlations for items within each subscale are found in Table 2. In general, they reflected positive and moderate associations between the items' scores and the combined score on the remaining items comprising the respective subscales. Two items had a correlation coefficient less than the .30 criterion. These items were: the organization's access to publications and to databases such as PubMed and the Cochrane Library (item-to-total correlation = .16); and the organization's staff has enough time for continuing professional development (item-to-total correlation = .26). A possible reason for the low item-to-total correlation is the low variability of scores across participants (most selected agree or strongly agree). Overall, the interitem correlations for items within their respective subscales were positive and of a moderate to high magnitude. The range of these correlation coefficients was identified for each subscale: organizational culture and values to support research evidence use (.33–.78); setting priorities to obtain research evidence (.50–.88); acquiring research evidence to inform decisions (.32–.58); ability to assess quality and applicability of the research evidence and to interpret the results to inform priority decisions (.39–.75); use of evidence to inform decisions (.35–.66); monitoring and evaluation of policies and programs (.54–.85); and professional development on evidence-based topics (.36–.57). Three items had interitem correlations >.80 suggesting potential redundancy. To further determine redundancy, the Cronbach's α coefficient for the respective subscales was computed with the redundant items included and then excluded from the analysis. When all of the items were included in the analysis, the subscales showed acceptable internal consistency reliability. The Cronbach's α coefficient was .81 for organizational culture and values to support research evidence use; .91 for setting priorities to obtain research evidence; .67 for obtaining research evidence to inform decisions; .82 for ability to assess quality and applicability of the research evidence and to interpret the results to inform priority decisions; .78 for use of evidence to inform decisions; .91 for monitoring and evaluation of policies and programs; and .76 for professional development on evidence-based topics. When potentially redundant items (i.e., those which had high interitem correlations) were removed from the respective subscales, the value of the Cronbach's α coefficient decreased slightly to .80 for "setting priorities to obtain research evidence," .78 for "monitoring and evaluation of policies and programs," and .60 for the "organization's ability to assess quality and applicability of the research evidence and to interpret the results to inform priority decisions" subscales. The reduced values of the Cronbach's α coefficient did not support exclusion of redundant items. Further, omitting the items would decrease the number of items comprising the respective subscales, which could pose challenges in how well the subscale operationalizes the aspect of organizational capacity for research evidence use. Accordingly, the items were retained until their performance can be evaluated in future research with larger sample sizes.

Table 2. Descriptive Statistics (Mean, Standard Deviation, and Item-Total Correlation) for Each Subscale of the Instrument (N = 57)

Organizational culture and values support the use of research evidence to inform decisions (six items; total points across items = 30; total scale score: mean = 22.81, SD = 5.38)
  (a) Our mission or other key organizational documents support evidence-informed decisions: mean 3.81, SD 1.28, item-total correlation .49
  (b) Leadership in the organization supports evidence-informed decisions: mean 4.07, SD 1.14, item-total correlation .58
  (c) We are active members in networks that support evidence-informed policymaking or actively follow the developments and the products of relevant networks: mean 3.68, SD 1.36, item-total correlation .31
  (d) We have regular meetings where highly relevant research evidence is discussed in relationship to decisions: mean 3.37, SD 1.17, item-total correlation .55
  (e) Our organization has committed resources to ensure that research evidence is used to inform decisions: mean 3.75, SD 1.43, item-total correlation .44
  (f) Overall, our organizational culture and values support the use of research evidence to inform decisions: mean 4.12, SD .98, item-total correlation .67

Organization's ability to set priorities for obtaining research evidence (five items; total points across items = 25; total scale score: mean = 14.67, SD = 5.86)
  (a) We have explicit criteria for setting priorities for obtaining research evidence: mean 2.56, SD 1.32, item-total correlation .59
  (b) An appropriate mix of people with relevant types of expertise, responsibilities, and interests make decisions about priorities for obtaining research: mean 3.53, SD 1.21, item-total correlation .69
  (c) We have an appropriate process for setting priorities for obtaining research evidence dynamically: mean 2.81, SD 1.44, item-total correlation .62
  (d) We have appropriate priorities for obtaining research evidence: mean 2.86, SD 1.42, item-total correlation .52
  (e) Overall, our organization does a good job of setting priorities for obtaining research evidence to inform decisions: mean 2.91, SD 1.40, item-total correlation .62

Organization's ability to obtain research evidence to inform decisions (five items; total points across items = 25; total scale score: mean = 18.86, SD = 3.49)
  (a) We have skilled staff to search for and retrieve research evidence: mean 4.09, SD .96, item-total correlation .40
  (b) Our staff have enough time, incentive and resources or arrangements with external experts to find and obtain research evidence: mean 3.00, SD 1.21, item-total correlation .43
  (c) We have good access to databases such as PubMed and The Cochrane Library and publications that report relevant research: mean 4.14, SD 1.15, item-total correlation .16
  (d) We have good access to national, provincial or local evidence that we need to inform decisions (e.g., routinely collected data, surveys, one-off studies): mean 3.79, SD 1.09, item-total correlation .43
  (e) Overall, our organization does a good job of obtaining research evidence to inform priority decisions: mean 3.84, SD .81, item-total correlation .62

Organization's ability to assess the quality and applicability of the research evidence and to interpret the results to inform priority decisions (four items; total points across items = 20; total scale score: mean = 12.74, SD = 3.99)
  (a) We have skilled staff to evaluate the quality and applicability of research evidence and interpret the results: mean 3.58, SD 1.17, item-total correlation .56
  (b) Our staff have enough time, incentive, and resources to evaluate the quality and applicability of research evidence and interpret the results: mean 2.65, SD 1.04, item-total correlation .68
  (c) We have arrangements with external experts to evaluate the quality and applicability of research evidence and interpret the results: mean 3.12, SD 1.46, item-total correlation .52
  (d) Overall, our organization does a good job of assessing the quality and applicability of research evidence and interpreting the results to inform priority decisions: mean 3.39, SD 1.19, item-total correlation .65

Organization's use of research evidence to inform recommendations and decisions (five items; total points across items = 25; total scale score: mean = 17.39, SD = 4.06)
  (a) Our staff have sufficient time, expertise, and incentive to ensure appropriate use of research evidence to inform recommendations and decisions: mean 3.18, SD 1.08, item-total correlation .49
  (b) Staff and appropriate stakeholders know how and when they can contribute research evidence to inform decisions and how that information will be used: mean 3.21, SD 1.61, item-total correlation .74
  (c) Our organization ensures that appropriate stakeholders are involved in decision making and that they have access to relevant research evidence: mean 3.68, SD 1.07, item-total correlation .44
  (d) What evidence was used and how it was used is transparent in our decisions: mean 3.68, SD 1.10, item-total correlation .54
  (e) Overall, our organization does a good job of using research evidence to inform recommendations and decisions: mean 3.63, SD 1.08, item-total correlation .62

Organization's ability to monitor and evaluate policies and programs (five items; total points across items = 25; total scale score: mean = 16.61, SD = 5.60)
  (a) We routinely consider the need for monitoring and evaluation: mean 3.84, SD 1.19, item-total correlation .44
  (b) Our staff have enough expertise or adequate arrangements with external experts for monitoring and evaluation: mean 3.21, SD 1.26, item-total correlation .49
  (c) Our staff have the incentive and resources to conduct or commission monitoring and evaluation: mean 3.09, SD 1.29, item-total correlation .54
  (d) Our organization ensures that appropriate stakeholders are involved in decisions about monitoring and evaluation: mean 3.32, SD 1.35, item-total correlation .54
  (e) Overall, our organization does a good job of monitoring and evaluation of policies and programs: mean 3.16, SD 1.36, item-total correlation .61

Organization's ability to support continuing professional development that addresses evidence-based topics (six items; total points across items = 30; total scale score: mean = 17.68, SD = 5.24)
  (a) Our staff have enough time for continuing professional development: mean 3.53, SD 1.03, item-total correlation .26
  (b) We have routines to ensure that our staff continue to develop appropriate skills for obtaining, appraising, and applying research evidence: mean 2.82, SD 1.25, item-total correlation .56
  (c) Our staff prioritize continuing professional development activities that are "evidence-based" (i.e., with content that is based on research evidence and using continuing professional development methods that are based on research evidence): mean 2.77, SD 1.53, item-total correlation .45
  (d) We have appropriate routines for prioritizing internal professional continuing development activities that accommodate the needs of both new and long-term staff: mean 2.86, SD 1.31, item-total correlation .42
  (e) We have appropriate routines for deciding whether to support participation in external continuing professional development activities that accommodate the needs of both new and long-term staff: mean 2.61, SD 1.36, item-total correlation .50
  (f) Overall, our organization does a good job of supporting continuing professional development that addresses important topics and is evidence-based: mean 3.09, SD 1.21, item-total correlation .54

Note. Item-total correlation = corrected item-to-total correlation.

Discriminant Validity
Table 3 presents the results of the independent sample t test and the effect sizes comparing the responses of policy analysts and senior managers. A statistically significant difference was found for the Organization's Ability to Monitor and Evaluate Policies and Programs subscale. Policy analysts (mean = 15.47, SD = 6.35) had a lower mean score on this subscale than senior managers (mean = 18.57, SD = 3.29). Effect sizes for the remaining subscales ranged from −.44 to .16, indicating small differences in the subscales' mean scores for the two groups. Unsolicited comments made by participants during the telephone interviews provided some explanation for these findings. A few senior managers commented that they were not responsible for accessing or critiquing evidence; rather, they indicated that these activities fell under the skill set and responsibility of the policy analysts. Some policy analysts commented that they were comfortable describing the culture and process for evidence use in their department, but struggled to describe whether or not their observations were similar at the broader organizational level, as they were not involved in higher level management discussions. Overall, apart from this subscale, the results did not show significant differences in responses between senior managers and policy analysts.

Table 3. Discriminant Validity Results Using Independent Sample t-Test Comparing Policy Analysts and Senior Managers and Effect Size

Organizational culture and values supports the use of research evidence to inform decisions (six items): policy analysts 22.16 (5.60), senior managers 23.90 (4.90), t(55) = −1.18, p = .24, Cohen's d = −.32
Organization's ability to set priorities for obtaining research evidence (five items): policy analysts 14.27 (6.42), senior managers 15.33 (4.83), t(55) = −.65, p = .51, Cohen's d = −.17
Organization's ability to obtain research evidence to inform decisions (five items): policy analysts 18.72 (3.96), senior managers 19.09 (2.54), t(55) = −.38, p = .70, Cohen's d = −.10
Organization's ability to assess the quality and applicability of the research evidence and to interpret the results to inform priority decisions (four items): policy analysts 12.94 (3.94), senior managers 12.38 (4.12), t(55) = .51, p = .61, Cohen's d = .14
Organization's use of research evidence to inform recommendations and decisions (five items): policy analysts 17.63 (4.22), senior managers 16.95 (3.81), t(55) = .61, p = .54, Cohen's d = .16
Organization's ability to monitor and evaluate policies and programs (five items): policy analysts 15.47 (6.35), senior managers 18.57 (3.29), t(54.45) = −2.42, p = .01 (a), Cohen's d = −.56
Organization's ability to support continuing professional development that addresses evidence-based topics (six items): policy analysts 16.83 (5.74), senior managers 19.14 (3.94), t(55) = −1.62, p = .10, Cohen's d = −.44

Note. Values are means (SD); p values are 2-tailed. (a) Statistically significant.

DISCUSSION

This study offers the first examination of the psychometric properties of the Self-assessment for Organizational Capacity Instrument. Evidence-informed decision making is an important issue for health policymakers, and we address a current gap by evaluating an instrument that assesses organizational capacity for evidence use in this group. By comparing policy analysts with senior managers, we sought to better understand the evidence needs that inform policy decisions in these two groups. Due to the complex nature of the policy environment, these results suggested that policy analysts and senior managers may not only have different needs for information, but also varied ability to critique and examine the quality of the evidence. Although further study is needed, these results suggest that it may be more important for the policy analyst to have a strong understanding of how to search for and critically examine research, distinguishing good quality evidence from poor quality evidence so that it can be better used to support policy decisions. However, what the policy analyst may struggle with is the application of research results to policy issues, including how the evidence impacts or serves the needs of the larger organization, the government. Senior managers, as opposed to policy analysts, are more likely to be familiar with the perspective of the organization, and may be an important link to integrating the evidence at a broader systems level. Hence, it may not be as important for senior managers working in government to know how to critically appraise a research article; rather, it is important for them to possess the skills that enable them to consider the impact of the evidence in relation to the larger workings of the government, a realm to which the policy analyst may not be exposed.

Limitations of This Study
Despite its potential to inform evidence-informed decision making, this study has limitations that warrant future research. Limitations include the small pool of potentially eligible participants and the apparent similarity in their experiences and perceptions, which resulted in a restricted range in their responses. Due to the sample size, more sophisticated psychometric analysis, such as factor analysis, was not possible. One challenge that we encountered was that participants had difficulty securing time to participate in the study due to their rapidly changing work commitments. If an urgent issue came up for a senior manager, it would not always be possible to carry out a survey at the scheduled time. As a result, researchers need to be flexible when scheduling interview times with those in government environments. Despite these issues, our sample size was adequate to carry out this preliminary assessment of the instrument's psychometric properties. Ongoing evidence of reliability and validity using a larger and more diverse sample of policymakers is required so that more complex psychometric evaluation can be performed.

The study results also contribute to an emergent understanding of how organizations support research evidence use to inform policy decision making. The findings provide initial evidence of the instrument's reliability and validity. The items comprising each subscale were internally consistent; they measure the respective aspects of organizational capacity for research evidence use with minimal error. An area for ongoing evaluation is the assessment of potential item redundancy. Three items were found to be redundant but were retained because removing them reduced the value of the Cronbach's α coefficient and the number of items comprising the subscales. Whether or not these items should be retained in the instrument requires additional investigation using larger, more varied samples of policymakers at the policy analyst and senior management levels.

Results of the known group comparison showed a statistically significant difference in the policy analysts' and senior managers' perceptions of how well the government organization evaluates and monitors policies. However, no other subscale's scores differed between policy analysts and managers. Furthermore, the magnitude of the differences between the two groups of participants was small. These findings did not support the hypotheses, and may be related to the characteristics of the instrument or the sample. The content of the instrument may not be sensitive enough to discriminate responses of senior managers and policy analysts. The sample represented senior managers and policy analysts working in government or related organizations; these institutions may have comparable capacity for evidence use or may ascribe similar responsibilities to the two groups of participants.

Using an instrument to assess organizational capacity for evidence use among health policymakers can present unique challenges. One challenge is the potential for bias among respondents. Because government is often scrutinized by the media and must remain accountable to the public, those who work in government may not be able to answer the instrument questions without apprehension. While respondents in this study felt that they could answer truthfully, they did express some uncertainty in responding for the entire organization. These preliminary findings warrant further study. During the telephone survey, some participants expressed concerns that they did not want to make the "government look bad," and debated whether to select a more favorable response. Policy analysts, who saw themselves as the "frontline" workers of policy development, had a good understanding of the operations and activities within their departments, but sometimes found it difficult to answer instrument items when considering the operations of the entire ministry. These respondents indicated that senior managers were better able to answer on behalf of the organization as a whole because they were involved in higher level decision making than the policy analysts. This finding is supported by Birken, Lee, and Weiner (2012), who found that managers in health care organizations oversee and have influence over the implementation of innovations, such as those that support research use within the organization. Likewise, senior managers have influence on frontline workers, and a lack of support for an innovation, such as evidence use, could affect whether or not it is carried out (Birken et al., 2012). These issues have implications for the instrument: polarized responses can introduce error and affect the overall psychometric assessment of the instrument. Additional studies with a larger and more varied sample can help evaluate the overall integrity of the items and assess whether any additional changes to the instrument need to be made.



IMPLICATIONS FOR RESEARCH AND POLICY

The Self-Assessment for Organizational Capacity Instrument is relevant for policymaking organizations, including health and related ministries, to use as a way to assess their organizational ability to access, appraise, and utilize research evidence to inform decisions. With increasing government emphasis on demonstrating accountability for decisions, funding allocations, and the distribution of services, exploring how well health ministries (and government as a whole) support evidence use is an important consideration for demonstrating accountability. More research is needed to better understand the organizational capacity for evidence use across health and related ministries. Building on this study, a next phase of research could examine the types of resources that health policy environments need to support evidence-informed decision making, and the facilitators and barriers to evidence use in policy environments. By identifying areas of strength and areas for continued effort, health ministries can better understand how research use occurs within the organization, and establish supports to facilitate improved evidence-informed health policy. This study provides a first look at the Self-assessment for Organizational Capacity Instrument's psychometric properties and demonstrates that this instrument can be useful when evaluating government and other organizations' use of evidence to inform decision making. Due to our sample size, we must consider these results cautiously and remember that ongoing testing of this instrument is needed. Future psychometric evaluation of this promising instrument requires a larger and more varied sample of policymakers from health and related ministries. The sample should also include both policy analysts and senior managers across varied policymaking contexts so that a robust assessment can be made of how well this instrument distinguishes between different groups involved in policy decision making. Factor analysis should also be incorporated into the psychometric evaluation of this instrument to identify item redundancy and consistency across the measure.

CONCLUSIONS

The Self-assessment for Organizational Capacity Instrument demonstrates acceptable internal consistency and discriminant validity in this preliminary assessment of its psychometric properties. Use of this instrument can be a starting point for government and related organizations to better understand how well they support evidence use, including its acquisition, appraisal, and application in health policy decision making. WVN

LINKING EVIDENCE TO ACTION

• Policy statements need to be based on high-quality research evidence.

• The Self-Assessment for Organizational Capacity Instrument for Evidence-Informed Health Policy is a promising scale that can aid organizations in their assessment of capacity to use research evidence.

• Further research with this tool that includes larger samples is important to further establish its validity and reliability.

Author information

Cristina Catallo, Associate Professor, Daphne Cockwell School of Nursing, Ryerson University, Toronto, Ontario, Canada; Souraya Sidani, Professor, Daphne Cockwell School of Nursing, Ryerson University, Toronto, Ontario, Canada; Canada Research Chair in Design and Evaluation of Health Interventions. Address correspondence to Dr. Cristina Catallo, Daphne Cockwell School of Nursing, Ryerson University, 350 Victoria Street, POD 458B, Toronto, Ontario, Canada M5B 2K3; ccatallo@ryerson.ca

Accepted 17 July 2013
Copyright © 2013, Sigma Theta Tau International

References

Beyer, J. M., & Trice, H. M. (1982). The utilization process: A conceptual framework and synthesis of empirical findings. Administrative Science Quarterly, 27(4), 591–622.

Birken, S. A., Lee, S. D., & Weiner, B. J. (2012). Uncovering middle managers' role in healthcare innovation implementation. Implementation Science, 7(28). doi:10.1186/1748-5908-7-28

Canadian Foundation for Healthcare Improvement (formerly The Canadian Health Services Research Foundation). (n.d.). Is research working for you? A self-assessment tool and discussion guide for health services management and policy organizations. Retrieved from http://www.cfhi-fcass.ca/PublicationsAndResources/ResourcesandTools/SelfAssessmentTool.aspx

Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.

de Vaus, D. (2002). Surveys in social research (5th ed., p. 184). New South Wales, Australia: Routledge.

Dillman, D. (1978). Mail and telephone surveys: The total design method. Hoboken, NJ: John Wiley & Sons.

Graham, I. D., & Logan, J. (2004). Innovations in knowledge transfer and continuity of care. Canadian Journal of Nursing Research, 36(2), 89–103.

Hoffman, S. J., Lavis, J. N., & Bennett, S. (2009). The use of research evidence in two international organizations' recommendations about health systems. Healthcare Policy, 5(1), 66–86.

Jewell, C. J., & Bero, L. A. (2008). Developing good taste in evidence: Facilitators of and hindrances to evidence-informed health policymaking in state government. The Milbank Quarterly, 86(2), 177–208.

Kitson, A., Harvey, G., & McCormack, B. (1998). Enabling the implementation of evidence based practice: A conceptual framework. Quality in Health Care, 7, 149–158. doi:10.1136/qshc.7.3.149

Kothari, A., Edwards, N., Hamel, N., & Judd, M. (2009). Is research working for you? Validating a tool to examine the capacity of health organizations to use research. Implementation Science, 4, 46, 1–9. doi:10.1186/1748-5908-4-46

Lavis, J. N. (2006). Research, public policymaking, and knowledge-translation processes: Canadian efforts to build bridges. The Journal of Continuing Education in the Health Professions, 26(1), 37–45.

Lavis, J. N. (2009). How can we support the use of systematic reviews in policymaking? PLOS Medicine, 6(11), e1000141. doi:10.1371/journal.pmed.1000141

Lavis, J. N., Lomas, J., Hamid, M., & Sewankambo, N. K. (2006). Assessing country level efforts to link research to action. Bulletin of the World Health Organization, 84, 620–628.

Lavis, J. N., Oxman, A. D., Lewin, S., & Fretheim, A. (2009). SUPPORT Tools for evidence-informed health policymaking (STP). Health Research Policy and Systems, 7(Suppl. 1), I1, 1–7. doi:10.1186/1478-4505-7-S1-I1

Lavis, J. N., Oxman, A. D., Moynihan, R., & Paulsen, E. J. (2008). Evidence-informed health policy 1: Synthesis of findings from a multi-method study of organizations that support the use of research evidence. Implementation Science, 3, 53, 1–7. doi:10.1186/1748-5908-3-53

National Collaborating Centre for Methods and Tools. (2009). The Registry of methods and tools. Retrieved from http://www.nccmt.ca/registry/index-eng.html

Oxman, A. D., Lavis, J. N., & Fretheim, A. (2007). The use of evidence in WHO recommendations. Lancet, 369, 1883–1889.

Oxman, A. D., Vandvik, P. O., Lavis, J. N., Fretheim, A., & Lewin, S. (2009). SUPPORT Tools for evidence-informed health policymaking (STP) 2: Improving how your organisation supports the use of research evidence to inform policymaking. Health Research Policy and Systems, 7(Suppl. 1), S2, 1–10. doi:10.1186/1478-4505-7-S1-S2

Peirson, L., Catallo, C., & Chera, S. (2013). The registry of knowledge translation methods and tools: A resource to support evidence-informed public health. International Journal of Public Health, 58(4), 493–500. doi:10.1007/s00038-013-0448-3

Streiner, D. L., & Norman, G. R. (2003). Health measurement scales: A practical guide to their development and use (3rd ed.). New York, NY: Oxford University Press.

Thornhill, J., Judd, M., & Clements, D. (2009). CHSRF knowledge transfer: (Re)introducing the Self-Assessment Tool that is helping decision-makers assess their organization's capacity to use research. Healthcare Quarterly, 12(1), 22–24.

Wathen, C. N., Tanaka, M., Catallo, C., Lebne, A. C., Friedman, M. K., Hanson, M. D., . . . McMaster IPV Education Research Team. (2009). Are clinicians being prepared to care for abused women? A survey of health professional education in Ontario, Canada. BMC Medical Education, 9, 34, 1–11. doi:10.1186/1472-6920-9-34

World Health Organization. (2004). The Mexico statement on health research: Knowledge for better health: Strengthening health systems. Geneva, Switzerland: Author.

doi 10.1111/wvn.12018 WVN 2014;11:35–45

