
Institutional capacity for health systems research in East and Central African schools of public health: experiences with a capacity assessment tool

Abstract

Background

Despite significant investments in health systems research (HSR) capacity development, there is a dearth of information regarding how to assess HSR capacity. An alliance of schools of public health (SPHs) in East and Central Africa developed a tool for the self-assessment of HSR capacity with the aim of producing institutional capacity development plans.

Methods

Between June and November 2011, seven SPHs across the Democratic Republic of Congo, Ethiopia, Kenya, Rwanda, Tanzania, and Uganda implemented this co-created tool. The objectives of the institutional assessments were to assess existing capacities for HSR and to develop capacity development plans to address prioritized gaps. A mixed-method approach was employed consisting of document analysis, self-assessment questionnaires, in-depth interviews, and institutional dialogues aimed at capturing individual perceptions of institutional leadership, collective HSR skills, knowledge translation, and faculty incentives to engage in HSR. Implementation strategies for the capacity assessment varied across the SPHs. This paper reports findings from semi-structured interviews with focal persons from each SPH, to reflect on the process used at each SPH to execute the institutional assessments as well as the perceived strengths and weaknesses of the assessment process.

Results

The assessment tool was robust enough to be utilized in its entirety across all seven SPHs resulting in a thorough HSR capacity assessment and a capacity development plan for each SPH. Successful implementation of the capacity assessment exercises depended on four factors: (i) support from senior leadership and collaborators, (ii) a common understanding of HSR, (iii) adequate human and financial resources for the exercise, and (iv) availability of data. Methods of extracting information from the results of the assessments, however, were tailored to the unique objectives of each SPH.

Conclusions

This institutional HSR capacity assessment tool and the process for its utilization may be valuable for any SPH. The self-assessments, as well as interviews with external stakeholders, provided diverse sources of input and galvanized interest around HSR at multiple levels.


Background

Since the early 2000s there has been increasing recognition of and support for health systems research (HSR) as a means to strengthen health policies, programs, and practices in low- and middle-income countries [1, 2]. It is widely acknowledged, however, that one of the barriers to more effective generation and use of health systems evidence is the limited capacity for HSR in many countries, particularly in low-income countries [3, 4]. A lack of capacity for HSR may manifest itself in multiple ways, such as an insufficient number of skilled researchers to conduct multidisciplinary research, inadequate library or information systems, unsupportive organizational environments, or an absence of linkages to policymakers and decision-makers who form the principal audience for much HSR. Given this array of challenges, it can be difficult to know where to start in terms of building capacity for HSR.

Despite significant investments in research capacity development by agencies [5, 6] such as the Wellcome Trust, the Fogarty International Center, Canada’s International Development Research Centre (IDRC), and the Swedish International Development Cooperation Agency, among others, there is a strikingly limited literature on assessing research capacity [7]. Frameworks for assessing health research capacity strengthening exist; however, many of these have been designed to satisfy funders’ needs and few provide practical guidance for their implementation [8]. Furthermore, systematic methods of assessing existing capacity as an input to identifying areas for strengthening were not available. Commonly employed approaches have only been used retrospectively [9] and, typically, have focused on easily measured quantitative indicators of research and training outputs, such as the number of people trained and the number of articles published [10, 11], rather than less easily measured but equally important outcomes such as changes in organizational culture or attitudes. Moreover, with only a few notable exceptions [10, 12–14], very little of this literature has focused on HSR.

This paper reports on an initiative supported by the Future Health Systems consortium [15] through the Higher Education Alliance for Leadership Through Health (HEALTH), which is a consortium of seven schools of public health (SPHs) in East and Central Africa. The SPHs included are:

  a) Jimma University College of Public Health and Medical Sciences (CPHMS), Ethiopia;

  b) Kinshasa School of Public Health (KSPH), Democratic Republic of the Congo (DRC);

  c) Makerere University College of Health Sciences School of Public Health (MakSPH), Uganda;

  d) Moi University School of Public Health (MUSOPH), Kenya;

  e) Muhimbili School of Public Health and Social Sciences (MUSPHSS), Tanzania;

  f) National University of Rwanda School of Public Health (NURSPH), Rwanda;

  g) University of Nairobi School of Public Health (SPHUoN), Kenya.

Brief details about each of the seven schools are listed in Table 1. Among the seven SPHs, only KSPH, DRC, had previously undergone an organizational assessment that included the identification of public health training needs and characterization of faculty capacity to teach public health [16]. However, the scope was rather wide, the evaluation was externally contracted, and the study did not include HSR. Similarly, in Uganda, a 2010 assessment encompassed the entire College of Health Sciences and its contribution to improving health in the country [17]. Hence, the overarching aim of this initiative was to devise self-led organizational HSR capacity assessments that would result in internally cultivated HSR capacity development strategies for the HEALTH Alliance SPHs.

Table 1 Summary of each school of public health capacity self-assessment process

As the aim of the assessment was primarily to identify where capacity development investments in HSR would have the greatest impact, a narrow focus on outputs was inappropriate. Our review of the literature found only one other published paper that shared a similar objective, employing a capacity assessment tool that combined both quantitative and qualitative elements, to help strengthen and shape a capacity development intervention; however, the study was focused on a single training intervention and did not concern HSR [18]. Since the completion of the HEALTH Alliance exercise, there has been one other relevant publication [19]. Its thrust, however, is on the outcomes, not the process, of a capacity assessment for health policy and systems research and analysis.

Past initiatives in sub-Saharan Africa have focused primarily on identifying and enhancing individual competencies [2022], but the institutional environments within which individuals work are often key to retaining and empowering them [23, 24]. Our approach was to develop a capacity assessment process that could provide a systematic method for each of the SPHs to reflect on institutional strengths and weaknesses with respect to HSR. The capacity assessment tool also aimed to stimulate discussion and agreement on what strategies would be most effective to help develop HSR capacity. Note that the focus of the assessment was organizational capacity for HSR, although individual researcher capacity was also addressed.

The main objectives for the capacity assessment process were the following.

  • Through a process of self-assessment, develop a clear picture of existing capacities for HSR at the HEALTH Alliance SPHs.

  • For each SPH, draft and build consensus around HSR capacity development strategies and work plans for implementing them.

  • Make an initial and rapid assessment of HSR priorities in the different countries involved in the HEALTH Alliance, in part as a means to strengthen links to the policymaking and decision-making community.

While the primary purpose of the assessment was not to provide a “baseline” against which changes in capacity could be assessed, some of the indicators captured may be feasible to use in this way. Findings from the self-assessments are reported separately [2527]. This paper reviews the design and application of the self-assessment tool, the resulting outcomes, and lessons learned through its implementation with the aim of informing others who seek to use similar tools.

The following section presents the self-assessment tool in more detail. We then briefly describe the methods used to review and reflect on experience with the self-assessment tool. The findings section presents (i) the actual capacity assessment process and how it varied across countries, (ii) the key outcomes of the assessment, and (iii) the factors that affected the assessment process.

Methods

The capacity assessment tool

The overall structure of the tool was derived primarily from an instrument that IDRC uses to assess the organizational capacity needs of its partner research organizations [28], but with questions adapted to focus specifically on HSR. Some specific questions were informed by a qualitative instrument that was used to assess health policy analysis institutes, including their capacity and sustainability [29], and by tools used by the Alliance for Health Policy and Systems Research [4]. The proposed implementation process and format of the tool drew upon processes recommended in a similar tool developed by the Canadian Health Service Research Foundation (CHSRF) that seeks to examine the capacity of organizations to acquire and apply research evidence [30]. The CHSRF tool combines individual self-assessments with discussions across the organization that aim to stimulate action to address identified weaknesses.

A first draft of the capacity assessment tool was developed by Johns Hopkins University (JHU) in early 2011. The Deans of the SPHs identified focal persons (FPs) to lead the initiative, and this group together with JHU colleagues met at a workshop in Uganda in June 2011, where the tool was revised and adapted to meet local needs. The tool consisted of three components, described below, designed to provide both internal and external perspectives, as well as subjective and objective information on capacity. The majority of amendments made during the June 2011 workshop concerned the order, phrasing, and clarification of the questions. The necessity for ethics approval was discussed among the FPs, with the final conclusion that all SPHs would seek clearance from their institutional Ethics Review Committees. KSPH colleagues translated the tool into French for use in the DRC. The final version of the HSR capacity assessment tool is available as a separate document (Additional file 1).

The final Capacity Assessment Tool consisted of the following components:

A: Self-assessment of existing capacity for health systems research. Faculty and staff who were currently or potentially engaged in HSR were asked to complete a subjective self-administered questionnaire that sought their opinions on HSR capacity at their organization. The questionnaire began by providing a definition of HSR and offering examples of studies we believe reflect HSR as well as those that do not. The instrument included questions on the external environment, organizational motivation and commitment to HSR, organizational capacity (including the relevant skills of individual faculty and staff), research support, and inter-institutional linkages. There were multiple items regarding capacity under each of these headings, and respondents used a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) to indicate the extent to which they agreed with each statement. Data were captured on paper and entered into Microsoft Excel, and average scores per item were calculated as described elsewhere [25]. Table 2 provides examples of statements that respondents rated.

Table 2 Illustrative questions from self-administered internal faculty survey
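As a purely illustrative aside, the per-item scoring described above (averaging 5-point Likert ratings across respondents) can be sketched as follows. The item identifiers and ratings below are hypothetical and are not drawn from the actual assessments.

```python
from statistics import mean

# Hypothetical responses: one dict per respondent, keyed by item ID,
# with values on the 1-5 Likert scale (1 = strongly disagree, 5 = strongly agree).
responses = [
    {"env_1": 4, "motiv_1": 3, "skills_1": 5},
    {"env_1": 2, "motiv_1": 4, "skills_1": 4},
    {"env_1": 3, "motiv_1": 3, "skills_1": 5},
]

def item_averages(responses):
    """Average score per item across all respondents, rounded to 2 decimals."""
    items = {}
    for r in responses:
        for item, score in r.items():
            items.setdefault(item, []).append(score)
    return {item: round(mean(scores), 2) for item, scores in items.items()}

print(item_averages(responses))
```

In practice the paper-based forms were transcribed into Excel, where the equivalent calculation is a per-column average; the sketch above only illustrates the arithmetic.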

B and C: Profile of health systems research within the institution. The first form, completed by each FP or their team, captured objective data about the number of staff working in HSR, their relevant skills, and the type of work they conduct. The second involved interviews with key informants within senior management at the institution.

D: Identification of priority health systems research questions. Researchers at the SPH and key stakeholders within the country were consulted (in a “quick and dirty” exercise) to determine the priority research questions in the health systems field. In each country, the FP or their team conducted five to eight semi-structured interviews using a provided interview guide.

Once these three steps were completed, FPs and their teams conducted preliminary analyses on the results. They then sought to convene two workshops with key stakeholders from within their respective SPHs. The first of these workshops focused on discussing the emerging findings from the HSR capacity assessment, interpreting these findings, and reaching agreement on strengths and weaknesses in existing HSR capacity. The second workshop was intended to discuss and build the capacity development plan. JHU provided draft agendas for these workshops. The capacity assessment teams incorporated discussions from the first workshop into the capacity assessment report, and findings from the second workshop formed the basis of the capacity development plan. Table 3 provides an overview of the workshops held by each SPH.

Table 3 Summary of each school of public health’s institutional capacity assessment results dissemination strategies

The seven SPHs implemented the capacity assessment during the months that followed the June 2011 meeting. Ethics approval was sought at all sites except MUSPHSS, Tanzania, where it was deemed unnecessary by the Dean. The FPs along with JHU reconvened in December 2011 to share their results, reflect on the outcomes, and discuss priorities for capacity development plans. Each SPH created a final report and a presentation on the detailed capacity self-assessment results that were shared internally. A two-page brief of the results and priorities was produced for external audiences. Refer to Figure 1 for a flowchart of the assessment process and Table 1 for a summary of the respondents.

Figure 1 Self-assessment process – an idealized step-by-step flowchart.

Analyzing the experience of applying the capacity assessment tool

The authors of this paper tracked the implementation of the assessment tool by participating in each of the international meetings, joining regular teleconferences where progress was shared, and responding to occasional emails from FPs. The authors also reviewed draft findings from the institutional reports.

In addition, semi-structured interviews with all seven of the FPs in the different SPHs were conducted between March and May 2012 in order to document the differences in implementation processes and outcomes. Interview questions addressed (i) the implementation process, (ii) factors that facilitated implementation, (iii) challenges in implementation, (iv) perspectives on outcomes, and (v) advice and further thoughts. In particular, questions focused on data collection steps, deviations from protocol, human and financial resources needed for the assessment, internal dissemination of workshop details, feedback on the assessment from participants and senior leadership, assessment successes, assessment challenges and solutions, reflections on outcomes, and best practices to inform future assessments. Responses were captured verbatim and extensive notes were taken during the interviews, all of which were reviewed and verified for content by respondents. Two members of the study team together reviewed the notes using a grounded theory approach [31] to generate a list of recurring themes related to ‘factors’ – both positive and negative – that affected the capacity assessment process. Similarly, they created a list of ‘outcomes’, both expected and unanticipated; all themes were then reviewed, collated, and summarized.
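As a hypothetical illustration of the final collation step, once reviewers have attached theme labels to interview excerpts, recurring ‘factor’ themes can be tallied and ranked by frequency. The theme labels and counts below are invented for illustration and are not taken from the study data.

```python
from collections import Counter

# Hypothetical theme codes assigned to excerpts from the FP interviews,
# each paired with whether the factor was perceived as positive or negative.
coded_excerpts = [
    ("leadership_support", "positive"),
    ("understanding_of_HSR", "negative"),
    ("leadership_support", "positive"),
    ("data_availability", "negative"),
    ("understanding_of_HSR", "negative"),
]

# Tally how often each theme recurs across interviews.
theme_counts = Counter(theme for theme, _ in coded_excerpts)

for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}")
```

Such a tally only summarizes frequency; in grounded theory the substance of each theme still comes from iterative reading and comparison of the underlying notes.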

Findings

Capacity assessment process

All SPHs except for NURSPH, Rwanda, used the same capacity assessment tool. NURSPH, Rwanda, had launched the assessment in advance of the June 2011 meeting in Uganda and therefore used a previous version of the tool. A common criticism was that the tool was too long; hence, during the implementation process, each SPH made minor amendments to the tool based on their local needs. Ethics approval was sought at all sites except MUSPHSS, Tanzania, where the Dean considered the activity to be a capacity building exercise rather than a research initiative.

Except for MUSPHSS, Tanzania, the self-assessment component was sent to respondents via email as well as in hard copy for them to fill out individually and return anonymously. Respondents were given between one week and one month to return their forms. The return rates varied greatly between schools and in all cases respondents had to be reminded several times. To address low response rates, the assessment team at MUSPHSS, Tanzania, invited all of the respondents to convene together. They completed the self-assessments individually, discussed them as a group, and sought final answers through consensus. Although the assessment was designed to be self-administered, four out of seven teams (MUSOPH (Kenya), KSPH (DRC), SPHUoN (Kenya), MUSPHSS (Tanzania)) resorted to interviewer-administered assessments in instances where familiarity with the objective was low, time was constrained, help was sought, or language was a barrier.

In order to complete all sections of the protocol within the budgeted time, FPs recruited assistance, particularly for pursuing respondents, collecting the completed forms, facilitating self-assessments, travelling to other cities to speak to external partners, planning dissemination meetings, data analysis, and report writing. The assessment team was made up of 3 to 5 people depending on the size of the school. Teams varied in composition, comprising faculty with experience in interviewing and research, post-graduate students under training, external partners interested in HSR, and, in some cases, translators and transcribers. A sound understanding of HSR was a prerequisite in all cases, to ensure that the assessment team could serve as a resource for respondents in the event that there were questions or clarifications during administration of the tool.

All SPHs convened at least one workshop with the intent to share the findings of the capacity assessment. Capacity development planning was absorbed into the first dissemination workshop for CPHMS, Ethiopia, and for MakSPH, Uganda. However, it was the primary topic for the second workshop for SPHUoN, Kenya, for MUSPHSS, Tanzania, and for KSPH, DRC. NURSPH, Rwanda, did not convene a second workshop. The guidelines for the result dissemination workshops were used without significant adaptation, although the time allocation to each section of the workshop was often reduced and the method of interaction varied as a result (Table 3). The workshops lasted from 1.5 to 3 hours. In most instances the FPs facilitated the workshops except for the ones in Kinshasa, which were facilitated by the Director of KSPH, DRC. At MUSOPH, Kenya, results of the assessment were also shared online with all staff including the Dean. As shown in Table 3, all dissemination workshops involved respondents as well as key decision makers.

Reconciling the results from the various sections of the tool proved challenging; triangulation, however, allowed perceptions to be reinforced or misperceptions to be corrected. For instance, at MUSPHSS, Tanzania, some faculty were unaware of services and resources such as databases and library access and, assuming they did not exist, rated these as low in the self-assessment. Similarly, at MUSOPH, Kenya, faculty were under the impression that organizational policies were not available. At KSPH, DRC, approximately 60% of interviewees agreed that the SPH places high priority on HSR and 70% agreed that KSPH has an adequate number of individuals with strong quantitative research skills who are interested in applying them to HSR, but when few, if any, HSR outputs were identified, these views were called into question. Discussions and debates in the workshops led to a realization that HSR was poorly understood in the case of KSPH, DRC, and that resources were often available but either not distributed or not actively accessed in the examples of MUSPHSS, Tanzania, and MUSOPH, Kenya.

Factors affecting the capacity assessment process

Four overarching factors influenced the nature and outcomes of the assessment process.

Leadership and collaboration

Several SPHs stressed the importance of support from senior leadership, particularly the Deans, as a key facilitator of the process. Furthermore, they mentioned that the technical and moral support from the HEALTH Alliance Africa Hub and JHU was critical, particularly during moments of delays, frustrations, and bureaucratic red tape. Building in room for flexibility in the application of tools as well as the timing allowed for a more rigorous process and an opportunity to document any deviations from protocol for learning purposes.

Conceptual understanding of HSR

A universal challenge centered on the term HSR. SPH faculty who did not engage much in HSR were usually unfamiliar with what HSR comprises. Thus, during the self-assessments, FPs had to explain to each person individually what HSR is, help them determine whether they were currently engaged or interested in engaging in HSR, and assess whether their research was eligible for consideration under the HSR label.

“The lack of knowledge on HSR was overcome by explaining the operational definition by the hub that was created [by the HEALTH Alliance] in June 2011. Other times, HSR published papers were used as examples.” (FP, CPHMS, Ethiopia)

The lack of understanding of HSR in general affected the number of respondents, many of whom did not classify their own research as HSR and therefore deemed their participation unnecessary. Initial high levels of interest by faculty in participating in the assessment waned quickly, resulting in FPs having to first determine why participation rates were low. At MUSOPH, Kenya, the FP approached each faculty member in his or her office to better understand the lack of interest. The purpose seemed unclear and the length of the tool overwhelming, both serving as deterrents. In the case of NURSPH, Rwanda, the FP explained that “once everyone in the institution knew what was happening, they were responsive. But those who weren’t informed from the onset, they thought they weren’t part of the initiative and didn’t take time to respond.”

Once the reasons for low participation were determined, FPs explored innovative ways to encourage engagement. At KSPH, DRC, the FPs, with support from the Vice Dean of the School of Medicine, invited all senior and junior staff of the SPH to attend a journal club about HSR in advance of the self-assessment. At SPHUoN, Kenya, and MUSOPH, Kenya, the FPs had a sensitization session with department heads and board members during their strategic meetings. Informing staff that the assessment was the first step in a longer process in which capacity development plans were the ultimate aim was key to assuring them that their input was valuable and would lead to change.

Resource challenges

Perhaps the most-cited challenges were those regarding time and human resources. Although all teams met in June 2011 and agreed on a protocol, the plan to obtain institutional review board approval and contact all necessary key informants within the following 3 months proved overly optimistic. On the positive side, according to FPs at SPHUoN, Kenya, and MUSPHSS, Tanzania, the inclusion of extra team members to assist with the roll-out also provided an opportunity to mentor junior faculty and students. Financial support was in the range of $7,000 per school and all SPHs felt that the budget was sufficient for such an exercise.

All schools were able to access digital recorders either by borrowing or buying them for the purposes of the assessment. In cases where key informants refused to be taped, extensive notes were taken to ensure that the necessary information was still captured. Key informants were sometimes located far away, so scheduling and keeping appointments proved challenging at times. As articulated by the MUSPHSS, Tanzania, FP, “The problem isn’t the number of interviewers; it is the availability of data and people to interview! If people don’t have time, or the data doesn’t exist – there is nothing that can be done, regardless of how many interviewers there are!” Each team, however, found a strategy that proved effective.

“When I finally decided to just go to the [external stakeholder] offices and ask for 5 minutes of their time, I often received 30 minutes to an hour of discussion because they were interested. So it is negative that they didn’t appear to value academia when solicited by an email or call, but positive when they actually had a chance to engage in dialog.” (FP, MUSPHSS, Tanzania)

In the case of Kenya and DRC, many external persons completed the HSR priority setting exercise via email rather than through face-to-face interviews due to complex travel schedules or being located elsewhere in the country. These two contexts also permitted the teams to leverage existing networks and opportunities to access key informants as described below:

“A web forum [of external stakeholders] in Congo [DRC] exists. Parts of the questions of the interview were sent to the forum and five responses came this way. Others were met individually after phoning them.” (FP, KSPH, DRC)

“University of Nairobi is currently working on a curriculum for a Masters in Leadership and Health Services management. There was stakeholders workshop for these discussions and so I took advantage of that and managed all key informant interviews [for HSR priority setting] in one day.” (FP, SPHUoN, Kenya)

Data availability

Acquiring all of the necessary information for the institutional profile was difficult. For instance, garnering financial data relevant to grants and research at the university was frustrating due to fragmented accountability structures and aggregated accounting systems.

“We had to meet with the project office and dig up data. We also sent a form to the principal investigators if they would be able to provide info on each of their research projects but this was difficult.” (FP, KSPH, DRC)

Documenting research outputs was cumbersome for two reasons. First, HSR was poorly understood and so faculty did not list publications that should be classified as HSR. Second, there were difficulties in locating other types of non-peer reviewed research outputs. In the case of CPHMS, Ethiopia, all research outputs were tabulated so as to ensure wide capture and to avoid missing HSR-relevant outputs. MakSPH, Uganda, had similar challenges:

“We have a list of dissertations and went through each of them to tease out which are related to HSR. With respect to publications, we had to actually read much of the article to know if it was HSR and fit within the definition agreed upon during the June 2011 Uganda meetings. [We also] looked at courses that the faculty coordinate to determine if the individuals are HSR faculty.” (FP, MakSPH, Uganda)

FPs suggested the need to find a better way of classifying, archiving, and referencing research outputs. Short of creating a resource center, the FP at MUSOPH, Kenya, suggested that screening and analyzing research publications routinely submitted to the Dean’s office every year would be more efficient and reliable than requesting abstract submissions from each faculty researcher, as they did during this assessment. Similarly the FP at SPHUoN, Kenya, expressed that having access to the quarterly updates submitted by faculty to the Director of the SPH could have enhanced capture and documentation of outputs.

Outcomes of the capacity assessment exercise

In this section, we describe what transpired in each of the SPHs as a consequence of conducting the capacity assessment. Capturing the strengths, weaknesses, challenges, and opportunities of HSR in each SPH served as a useful basis for enhancing institutional strategic plans. As mentioned by the FP at KSPH, DRC, the assessment provided the justification for investment in capacity strengthening in each SPH and culminated in the creation of a capacity development plan for each SPH.

“The school had no policies or guidelines on human resource development and so these have been recognized. Insufficient persons with qualitative research skills emerged from the assessment as well. Absence of an HSR agenda in the school was highlighted. Also, people realized that we at the SPH do not communicate very well. We do not (seem to) care about media. Knowledge translation need and capacity was low. It was a concept that was new and unrealized so it has become an area that KSPH is now paying more attention to.” (FP, KSPH, DRC)

The exercise also highlighted that HSR is embedded across multiple classes and courses, rather than having an identity in its own right. This may be problematic for students seeking to build HSR skills as they struggle to identify the right classes to take, as well as potentially undermining a systematic approach to teaching HSR.

“Methods for HSR are different than Epidemiology and Biostatistics. It would be good to know from beginning about the differences – especially in the dissertations so that the right resources can be used to inform the student research.” (FP, MakSPH, Uganda)

Although the primary goal of the initiative was to conduct an assessment that would inform each SPH of its individual capacity to conduct HSR, all seven SPHs mentioned additional outcomes. For instance, discussions around the assessment raised the profile of HSR. These provided NURSPH, Rwanda, with a better sense of the range of research studies currently underway and where each falls in the HSR spectrum. MUSPHSS, Tanzania’s FP echoed this point by stressing that definitions of terms were not enough and that awareness raising and discussion were critical to ensuring a common understanding of HSR.

A notable outcome was that the assessment results and the outreach to external stakeholders provided insight about policy-relevant research agendas. Discussions with key informants seem to have increased the profile of some of the SPHs within their own countries. For instance, the lead role that KSPH, DRC, played in convening an HSR priority setting exercise led to renewed interest in involving KSPH, DRC, in healthcare reform in the DRC. Similarly, at CPHMS, Ethiopia, results of the assessment led the university’s Academic Commission to set up a new task force in early 2012 to investigate the reasons behind the low levels of research at the university and propose solutions. Moreover, the FP for the organizational assessment at CPHMS, Ethiopia, was asked to serve on this task force. The task force was subsequently supported by the establishment of a HSR team in September 2012. Exposure to external stakeholders such as colleagues at WHO and at the MOH was mentioned as another important outcome for the CPHMS, Ethiopia, team.

Last but not least, the exercise created a sense of collegiality amongst the seven schools that galvanized collective action for shared interests. For instance, CPHMS, Ethiopia, initiated a workshop with JHU on HSR methods training for all partners; MUSOPH, Kenya, took the lead on creating a new course on HSR for all the partners and spearheaded the crafting of grant proposals for funding agencies; and all FPs united in the publication of several manuscripts showcasing the results of the capacity assessments [25–27].

Discussion

The multidisciplinary nature of HSR inevitably proffers multiple interpretations of what it encompasses. Associated with this nebulous understanding is confusion about the types of individual skills and institutional structures needed to support HSR. While such problems have been recognized previously [4, 32], and were accordingly addressed in the original tool, the explanations and examples of HSR provided were not sufficient, resulting in a need to organize additional activities for faculty sensitization to HSR. Similar initiatives should consider raising awareness and providing clarity on time commitments as a first step in such an organizational assessment. Taking into consideration the extent to which HSR was a novel idea to some faculty, it would seem that interviewer-administered self-assessments were important for helping faculty understand the concept, clarifying questions, and enhancing completion rates of the assessments. The depth of responses, therefore, is likely to vary between assessments that were self-administered and those that were interviewer-administered. Similarly, the content of responses during face-to-face interviews is likely to vary significantly from those returned via email. Ultimately, the self-assessments provided faculty perceptions of organizational capacity. The institutional profiles provided more objective data regarding some of the issues that were considered in the self-assessments. Triangulation of sources served to validate the findings in some instances, particularly with objectively verifiable data (e.g., number of publications). However, in other cases (e.g., access to databases) the post-assessment workshops served as a conduit for dialogue to reconcile different perceptions and arrive at shared understandings.

Experiences across the seven schools suggest that it would be prudent to allocate approximately 6 months to a project such as this and target a period in the year when faculty and leadership have time to dedicate to the initiative. With the volume of information that needs to be collected as well as the skills and time needed for the various steps of the assessment, it would be necessary to enlist the assistance of 3 to 5 persons in addition to the FPs, depending on the size of the SPH.

When implementing an assessment across schools or even across departments, the importance of collective creation, consultation, and acceptance of the tools cannot be overemphasized. As was demonstrated in the study described in this paper and elsewhere [19, 25–27, 33], creating an atmosphere of collaboration and deliberation at the very beginning of the assessment process offered fertile ground for shared learning throughout. Reviewing the assessment tools together and in person prior to implementation allowed for a better understanding of the content, an opportunity to pilot test and refine the questions, and the establishment of a standardized methodology across the seven schools. This uniformity provided a basis for cross-institutional comparison and learning similar to that experienced by Mirzoev et al. when performing HSR capacity assessments across a different set of African SPHs [19]. Furthermore, building in flexibility for adapting materials so that they correspond to the relevant contexts is equally critical in unearthing the reasons why differences across the institutions are observed. Given that all sections of the tool were important for gaining an understanding of the institution as a whole, as well as perceptions of staff, it may be necessary to consider alternative ways of administering the tool so that all sections could be completed more quickly and easily. Although access to key external stakeholders led to positive outcomes in several cases, the purpose of setting a research agenda was not wholly fulfilled. Alternative strategies for stakeholder engagement need to be explored, perhaps through more deliberative dialogues [34].

From an analysis of the process of executing HSR institutional capacity assessments across these seven SPHs, it would appear that four factors play a critical role: leadership support, a conceptual understanding of HSR, adequate resources, and data availability.

An ex post facto reflection such as this allows institutions to learn from the experiences of others. However, reliance on FPs as the sole source of these reflections subjects the interpretations to response bias. Furthermore, the process evaluation occurred four months after the completion of the organizational capacity assessments and therefore may also suffer from recall bias. Given the collaborative nature of the project and the willingness to learn from the experiences of each of the SPHs, we expect that there is little incentive for FPs to skew their responses. Nevertheless, it may have been valuable to conduct interviews with other members of the assessment team, select respondents, and the Dean. Clearly, this paper has not sought to validate the items used in the self-assessment questionnaire or assess their reliability: our focus was principally on understanding the processes and consequences of implementing the capacity assessment tool. The sample size at each participating SPH in our study was too small for any rigorous assessment of validity or reliability, but future studies with larger samples could usefully contribute to this.

Conclusions

The HSR capacity assessments permitted various aspects of capacity at the SPHs to be explored and analyzed. The intent was to roll out a self-led process for capacity assessment in order to devise HSR capacity development strategies for the SPHs and identify where capacity development investments would have the greatest impact. The assessments led to a number of notable outcomes. First, the exercise galvanized interest in HSR among faculty and leadership across all seven SPHs. Second, the assessment was critical in the development of new capacity development plans for each of the SPHs. Third, it spurred action for implementing joint activities such as course development, organizational activities such as local awareness raising, and individual activities such as training in HSR. In addition, the SPHs are independently and collectively seeking funding for the various aspects of their capacity development plans. Fourth, the assessment stimulated the involvement and interest of key external stakeholders in collaborating with the SPHs. The tool (and exercise overall) has therefore demonstrated utility for generating much more than just an assessment of SPH capacity. It provides data that can be analyzed in a variety of ways [19, 25–27] yielding unexpected and valuable side benefits. Furthermore, unlike other institutional capacity assessments that are intended for one particular organization [33] or externally sourced [8], this paper provides insight into an individually endogenous, collectively unified, and environmentally contextualized process across several organizations.

In addition to raising the profile of HSR in each SPH and among that country’s healthcare leadership, an assessment such as this inevitably raises expectations about the next steps. The gaps identified through such a process often require strategic policy and practice changes that are seldom overcome without financial investments. Without the financial and human resources to accompany recommended changes, any momentum gained from galvanizing staff motivation could diminish quickly. This leads one to wonder: what is the best way to plan for capacity development in tandem with resource mobilization with a goal of promoting organizational performance enhancement?

Authors’ information

SB is CEO of the Future Health Systems (FHS) research programme consortium. DL and NJ worked with FHS from Johns Hopkins University, and EE leads the Uganda FHS team.

Abbreviations

CPHMS:

Jimma University College of Public Health and Medical Sciences, Ethiopia

CHSRF:

Canadian Health Service Research Foundation

FP:

Focal person

HEALTH Alliance:

Higher Education Alliance for Leadership Through Health

HSR:

Health systems research

IDRC:

International Development Research Centre, Canada

JHU:

Johns Hopkins University, USA

KSPH:

Kinshasa School of Public Health, Democratic Republic of Congo

MakSPH:

Makerere University College of Health Sciences School of Public Health, Uganda

MUSOPH:

Moi University School of Public Health, Kenya

MUSPHSS:

Muhimbili School of Public Health and Social Sciences, Tanzania

NURSPH:

National University of Rwanda School of Public Health, Rwanda

SPH:

School of public health

SPHUoN:

University of Nairobi School of Public Health, Kenya.

References

  1. World Health Organization: The Mexico Statement on Health Research: Knowledge for Better Health: Strengthening Health Systems. 2004, Geneva: WHO

  2. World Health Organization: The Bamako Call to Action on Research for Health: Strengthening Research for Health, Development, and Equity. 2008, Geneva: WHO

  3. Bennett S, Agyepong IA, Sheikh K, Hanson K, Ssengooba F, Gilson L: Building the field of health policy and systems research: an agenda for action. PLoS Med. 2011, 8 (8): e1001081-10.1371/journal.pmed.1001081.

  4. Bennett S, Adam T, Zarowsky C, Tangcharoensathien V, Ranson K, Evans T, Mills A, Agyepong I, Hu S, Lavis J, Makubalo L, Rannan-Eliya R, Sanchez D, Tomson G: From Mexico to Mali: progress in health policy and systems research. Lancet. 2008, 372: 1571-1578. 10.1016/S0140-6736(08)61658-X.

  5. Whitworth JA, Kokwaro G, Kinyanjui S, Snewin VA, Tanner M, Walport M, Sewankambo N: Strengthening capacity for health research in Africa. Lancet. 2008, 372: 1590-1593. 10.1016/S0140-6736(08)61660-8.

  6. Kitua AY, Corrah T, Herbst K, Nyirenda T, Agwale S, Makanga M, Mgone CS: Strengthening capacity, collaboration and quality of clinical research in Africa: EDCTP Networks of Excellence. Tanzan J Health Res. 2009, 11: 51-54.

  7. Gadsby EW: Research capacity strengthening: donor approaches to improving and assessing its impact in low- and middle-income countries. Int J Health Plann Manage. 2011, 26: 89-106. 10.1002/hpm.1031.

  8. Boyd A, Cole DC, Cho DB, Aslanyan G, Bates I: Frameworks for evaluating health research capacity strengthening: a qualitative study. Health Res Policy Syst. 2013, 11: 46-10.1186/1478-4505-11-46.

  9. Minja H, Nsanzabana C, Maure C, Hoffmann A, Rumisha S, Ogundahunsi O, Zicker F, Tanner M, Launois P: Impact of health research capacity strengthening in low- and middle-income countries: the case of WHO/TDR programmes. PLoS Negl Trop Dis. 2011, 5: e0001351-

  10. Varkevisser CM, Mwaluko GM, Le Grand A: Research in action: the training approach of the Joint Health Systems Research Project for the Southern African Region. Health Policy Plan. 2001, 16: 281-291. 10.1093/heapol/16.3.281.

  11. Manabe YC, Katabira E, Brough RL, Coutinho AG, Sewankambo N, Merry C: Developing independent investigators for clinical research relevant for Africa. Health Res Policy Syst. 2011, 9: 44-10.1186/1478-4505-9-44.

  12. Mayhew SH, Doherty J, Pitayarangsarit S: Developing health systems research capacities through north–south partnership: an evaluation of collaboration with South Africa and Thailand. Health Res Policy Syst. 2008, 6: 8-10.1186/1478-4505-6-8.

  13. Jönsson K, Tomson G, Jönsson C, Kounnavong S, Walhström R: Health systems research in Lao PDR: capacity development for getting research into policy and practice. Health Res Policy Syst. 2007, 5: 11-10.1186/1478-4505-5-11.

  14. Gonzalez Block MA, Mills A: Assessing capacity for health policy and systems research in low and middle income countries. Health Res Policy Syst. 2003, 1: 1-10.1186/1478-4505-1-1.

  15. Future Health Systems: Innovations for Equity. [http://www.futurehealthsystems.org/africa-hub/].

  16. Mock N, de Buhr E, Mukungo M, Wemakoy O: Public Health Training in the Democratic Republic of Congo: a case study of the Kinshasa School of Public Health. 2006, [http://www.jhsph.edu/research/affiliated-programs/global-research-activity/Congo.pdf]

  17. Pariyo G, Serwadda D, Sewankambo NK, Groves S, Bollinger RC, Peters DH: A grander challenge: The case of how Makerere University College of Health Sciences (MakCHS) contributes to health outcomes in Africa. BMC Int Health Hum Rights. 2011, 11 (Suppl 1): S2-10.1186/1472-698X-11-S1-S2.

  18. Bates I, Ansong D, Bedu-Addo G, Agbenyega T, Akoto AYO, Nsiah-Asare A, Karikari P: Evaluation of a learner-designed course for teaching health research skills in Ghana. BMC Med Educ. 2007, 7: 18-10.1186/1472-6920-7-18.

  19. Mirzoev T, Lê G, Green A, Orgill M, Komba A, Esena R, Nyapada L, Uzochukwu B, Made W, Nxumalo N, Gilson L: Assessment of capacity for Health Policy and Systems Research and Analysis in seven African universities: results from the CHEPSAA project. Health Policy Plan. 2013, 1-11.

  20. Organisation for Economic Co-operation and Development: The Challenge of Capacity Development: Working Towards Good Practice. 2006, Paris: OECD

  21. Potter C, Brough R: Systemic capacity building: a hierarchy of needs. Health Policy Plan. 2004, 19: 336-345. 10.1093/heapol/czh038.

  22. Green A, Bennett S: (Eds): Sound Choices: Enhancing Capacity for Evidence-Informed Health Policy. 2007, Geneva: WHO

  23. Pang T, Sadana R, Hanney S, Bhutta ZA, Hyder AA, Simon J: Knowledge for better health – a conceptual framework and foundation for health research systems. Bull World Health Organ. 2003, 81: 815-820.

  24. Gyapong JO, Ofori-Adeji D: Capacity Building for Relevant Health Research in Developing Countries. 2006, Netherlands: Netherlands Organisation for International Cooperation in Higher Education

  25. Simba D, Mukose A, Bazeyo W: Institutional capacity for health systems research in East and Central African Schools of Public Health: strengthening human and financial resources. Health Res Policy Syst. 2014, 12: 23-10.1186/1478-4505-12-23.

  26. Ayah R, Jessani N, Mafuta E: Institutional capacity for Health systems research in East and Central African Schools of Public Health: knowledge translation and effective communication. Health Res Policy Syst. 2014, 12: 20-10.1186/1478-4505-12-20.

  27. Nangami M, Rugema L, Tebeje B, Mukose A: Institutional capacity for Health systems research in East and Central African Schools of Public Health: enhancing capacity to design and implement teaching programs. Health Res Policy Syst. 2014, 12: 22-10.1186/1478-4505-12-22.

  28. Lusthaus C, Anderson G, Murphy E: Institutional Assessment: A Framework for Strengthening Organizational Capacity for IDRC’s Research Partners. 1995, Ottawa: International Development Research Centre

  29. Bennett S, Corluka A: Health Policy Analysis Institutes: Landscaping and Learning from Experience. 2010, Geneva: WHO

  30. Thornhill J, Judd M, Clements D: CHSRF knowledge transfer: (re)introducing the self-assessment tool that is helping decision-makers assess their organization's capacity to use research. Healthc Q. 2009, 12: 22-24.

  31. Strauss AM: Qualitative Analysis for Social Scientists. 1987, Cambridge: Cambridge University Press

  32. Sheikh K, Gilson L, Agyepong IA, Hanson K, Ssengooba F, Bennett S: Building the field of health policy and systems research: framing the questions. PLoS Med. 2011, 8 (8): e1001073-10.1371/journal.pmed.1001073.

  33. Bates I, Akoto AYO, Ansong D, Karikari P, Bedu-Addo G, Critchley J, Agbenyega T, Nsiah-Asare A: Evaluating health research capacity building: an evidence-based tool. PLoS Med. 2006, 3 (8): e299-10.1371/journal.pmed.0030299.

  34. Moat KA, Lavis JN, Clancy SJ, El-Jardali F, Pantoja T, Knowledge Translation Platform Evaluation Study Team: Evidence briefs and deliberative dialogues: perceptions and intentions to act on what was learnt. Bull World Health Organ. 2014, 92: 20-28. 10.2471/BLT.12.116806.

Acknowledgements

The authors would like to thank each of the Focal Persons from the seven HEALTH Alliance SPHs for their contributions to this paper: Richard Ayah, Suzanne Kiwanuka, Aggrey Mukose, Eric Mafuta, Mabel Nangami, Lawrence Rugema, Daudi Simba, and Bosena Tebeje. Without their diligence and willingness to be interviewed, it would not have been possible to capture the nuances of the capacity self-assessment process between the various contexts. Furthermore, we would like to acknowledge Aggrey Mukose, Chrispus Mayora, and Angela Nanyanzi for facilitating and coordinating the process, William Bazeyo, Director of the Africa Hub, and the Deans of the seven SPHs for their support. Furthermore, we express our gratitude to Michael Zierler for his thoughtful review and editorial contributions. The authors express their appreciation for the financial support (Grant #H050474) provided by the UK Department for International Development (DFID) for the Future Health Systems research programme consortium. This document is an output from a project funded by DFID for the benefit of developing countries. However, the views expressed are not necessarily those of DFID.

Author information

Corresponding author

Correspondence to Nasreen Jessani.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All four authors were closely involved in supporting the implementation of the self-assessment tool. SB designed the first draft of the tool. NJ, DL, and EE conducted the first workshop with representatives from the different SPHs where the tool was adapted to meet local needs (June 2011). NJ revised and finalized the tool upon user feedback. All four authors then participated in the analysis and dissemination workshop (December 2011), and reviewed the individual self-assessment reports. NJ and DL conducted semi-structured interviews with all seven of the lead faculty in the different SPHs to document the implementation process in each country. All authors read and approved the final manuscript.

Electronic supplementary material

12961_2013_323_MOESM1_ESM.docx

Additional file 1: Health Systems Research in the HEALTH Alliance Schools of Public Health: capacity development and collaborative research: Health Systems Research Capacity Assessment. This additional file contains the complete set of assessment tools that were used by the seven schools of public health that are members of the HEALTH Alliance. (DOCX 207 KB)

Authors’ original submitted files for images

Authors’ original file for figure 1

Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.

About this article

Cite this article

Jessani, N., Lewy, D., Ekirapa-Kiracho, E. et al. Institutional capacity for health systems research in East and Central African schools of public health: experiences with a capacity assessment tool. Health Res Policy Sys 12, 21 (2014). https://doi.org/10.1186/1478-4505-12-21
