How Much Can a KAP Survey Tell Us About People

Published December 2016

KAP surveys are attractive because the survey design is simple, the data are quantitative, a comparatively small sample is needed to generalise results to the population, implementation is fast, and the results allow cross-cultural comparison.

With KAP surveys we help our clients identify what the population knows about the study issues. In the education sphere, for instance, this might mean studying what the population knows about ongoing reforms, which reforms people have specifically heard about, and how the content of a reform is interpreted. In a survey of the court system, it might mean studying public awareness of the laws.

Attitudes are connected to an individual's knowledge, emotions and values, and can consequently shift between positive and negative. By studying attitudes we identify respondents' emotional evaluations of the survey issues: does a respondent like a specific reform, and how much? How important does she or he think a specific court reform is? What is her or his attitude towards a specific action or fact?

Studying knowledge and attitudes is not enough without also studying existing practice and behaviour. KAP surveys make it possible to identify in detail the population's patterns of behaviour towards the study issues. In healthcare, for instance, this might be the frequency of using ambulatory services or the challenges of using insurance. Analysis of such a survey shows what relationship exists between the population's knowledge, attitudes and behaviour, and how strong or weak that relationship is; for example, what effect raising awareness of the study issues would have on behaviour.

Survey Methodologies

“Perhaps the most important part of the survey process is the creation of questions that accurately measure the opinions, experiences and behaviors of the public. 
Accurate random sampling and high response rates will be wasted if the information gathered is built on a shaky foundation of ambiguous or biased questions.” (1)

There are three main types of survey questions, and each has its own risks and benefits.

Open-ended Questions

Open-ended questions ask participants to come up with their own responses, allowing the researcher to document respondents' opinions in their own words. These questions are useful for obtaining in-depth information on facts with which the researcher is not very familiar, on opinions, attitudes and suggestions, and on sensitive issues. Completely open-ended questions allow the researcher to probe more deeply into issues, providing new insights, bringing to light new examples or illustrations, and allowing for different interpretations and a variety of responses. Researchers who use open-ended questions must be skilled interviewers, since they need to record everything said to avoid losing important information, and the analysis is time-consuming.(2) In addition, open-ended responses can be difficult to analyze statistically because the data are not uniform and must be coded in some manner.(3) Examples of open-ended questions:
 

“What do you think are the reasons some adolescents in this area start using drugs?”

“What would you do if you noticed that your daughter (a schoolgirl) had a relationship with a teacher?”
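Because open-ended answers must be coded before they can be analyzed statistically, coding is usually done against a codebook of themes. The sketch below is a deliberately simplified illustration of that step; the themes, keywords, and responses are hypothetical and not drawn from any actual codebook.

```python
from collections import Counter

# Hypothetical codebook: keywords found in free-text answers are mapped
# to analytic themes. Real coding is usually done by trained coders, not
# keyword matching; this only illustrates the tallying step.
CODEBOOK = {
    "peer pressure": ["friends", "pressure", "fit in"],
    "family problems": ["parents", "home", "divorce"],
    "curiosity": ["curious", "try", "experiment"],
}

def code_response(text):
    """Return the set of themes whose keywords appear in a response."""
    text = text.lower()
    themes = {theme for theme, keywords in CODEBOOK.items()
              if any(kw in text for kw in keywords)}
    return themes or {"uncoded"}

# Hypothetical open-ended responses to the drug-use question above.
responses = [
    "They want to fit in with their friends",
    "Problems at home, parents fighting",
    "Just curious, they want to try it",
]

counts = Counter()
for r in responses:
    counts.update(code_response(r))

print(counts.most_common())
```

Once responses are coded this way, theme frequencies can be compared across groups just like closed-question data, which is the usual reason for doing the coding at all.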

Partially Categorized Questions

Partially categorized questions are similar to open-ended questions, but some answers have already been pre-categorized to facilitate recording and analysis. There is also usually an alternative titled "other" with a blank space next to it. The advantages of these questions are that answers can be recorded quickly and the analysis is often easier. One of the major risks is that the interviewer will categorize responses too quickly, resulting in a potential loss of interesting and valuable information. In addition, interviewers may try to force the information into the listed categories instead of exploring the question more thoroughly. If the respondent hesitates when answering a question, the interviewer may be tempted to present possible answers, causing bias. Therefore, the researcher must always avoid presenting possible answers to the study participant. Example of a pre-categorized open-ended question: "How did you become a member of the Village Health Committee?" (4) Categorize the response into these options:
    

- Volunteered
- Elected at a community meeting
- Nominated by community leaders
- Nominated by the health staff
- Other ____

Closed Questions

Closed questions have a list of possible answers from which respondents must choose. They can include yes/no questions, true/false questions, multiple choice questions, or scaled questions. Closed questions can be categorized into five different types:(5)


- Multiple Choice: useful when the researcher would like participants to select the most relevant response.
- Likert Scale: appropriate when the researcher would like to identify how respondents feel about a certain issue. The scale typically runs from "not at all important" to "extremely important," or from "strongly disagree" to "strongly agree."
- Numerical: used when possible responses are numeric in form, for example a respondent's age.
- Ordinal: useful when participants are asked to rank a series of responses.
- Categorical: appropriate when respondents are asked to place themselves in a specific category, for example male or female.
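Likert items are usually converted to numeric codes before analysis. A minimal sketch of that step follows; the five labels and the 1-5 coding are a common convention but an assumption here, as is the example data.

```python
# Illustrative numeric coding for a five-point Likert item. The labels
# and scores are a common convention, not a fixed standard.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

# Hypothetical responses to a single attitude question.
answers = ["agree", "strongly agree", "neutral", "agree", "disagree"]
scores = [LIKERT[a] for a in answers]
mean_score = sum(scores) / len(scores)
print(f"mean attitude score: {mean_score:.2f}")  # midpoint of the scale is 3
```

Treating Likert codes as interval data (so that a mean is meaningful) is itself a methodological choice; many analysts prefer to report the frequency of each category instead.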





 

Closed questions are commonly used for obtaining data on background information such as age, marital status, or education. Closed questions may also be used to assess a respondent’s opinions or attitudes by choosing a rating on a scale. Additionally, closed questions may be used to elicit specific information in an efficient manner. For example, a researcher who is only interested in the sources of protein in a person’s diet may ask: Did you eat any of the following foods yesterday? (6)

    

- Peas, beans, lentils (yes/no)
- Fish or meat (yes/no)
- Eggs (yes/no)
- Milk or cheese (yes/no)
- Insects (yes/no)
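Because every answer to a closed checklist like this comes from a fixed list, tallying is mechanical. The following sketch uses hypothetical responses to the protein checklist above; the field names and data are illustrative only.

```python
from collections import Counter

# Hypothetical yes/no responses to the protein checklist; each record
# is one respondent.
records = [
    {"peas_beans_lentils": "yes", "fish_or_meat": "no",  "eggs": "yes",
     "milk_or_cheese": "no",  "insects": "no"},
    {"peas_beans_lentils": "no",  "fish_or_meat": "yes", "eggs": "yes",
     "milk_or_cheese": "yes", "insects": "no"},
    {"peas_beans_lentils": "yes", "fish_or_meat": "yes", "eggs": "no",
     "milk_or_cheese": "no",  "insects": "no"},
]

# Count "yes" answers per food item across all respondents.
yes_counts = Counter()
for record in records:
    for item, answer in record.items():
        if answer == "yes":
            yes_counts[item] += 1

for item, n in yes_counts.items():
    print(f"{item}: {n}/{len(records)} ate this yesterday")
```

This uniformity is exactly the time-efficiency advantage of closed questions discussed below, and also the source of their main weakness: nothing outside the fixed list is ever recorded.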

Closed questions are time-efficient, and the responses are simple to compare across groups or within the same group over time. However, closed questions often yield data that are biased or invalid. For example, the uniformity of ratings may be deceptive and create bias. In addition, when respondents are illiterate, the interviewer may have to read the list of possible answers in a given sequence, introducing further bias into the study.(7) Though closed questions are easier to analyze statistically, they seriously limit the range of participant responses.(8)

The Impact of Choosing Open-ended or Closed-ended Questions

The validity of a research study depends on the researcher's selection of survey questions, so the decision to use open-ended or closed questions is critical. A question asked in an open-ended format can yield drastically different results from the same question asked in a closed format. For example, a poll conducted after the 2008 presidential election asked respondents "what one issue mattered most to you in deciding how you voted for president?" After asking the question in both an open-ended and a closed-ended manner (where the options were the economy, the war in Iraq, health care, terrorism, energy policy, and other), researchers found that "when explicitly offered the economy as a response, more than half of respondents (58%) chose this answer; only 35% of those who responded to the open-ended version volunteered the economy. 
Moreover, among those asked the closed-ended version, fewer than one-in-ten (8.5%) provided a response other than the five they were read; by contrast fully 43% of those asked the open-ended version provided a response not listed in the closed-ended version of the question." (9) This example illustrates how open-ended questions elicit a larger variety of responses and how closed-ended questions may influence people towards choosing a certain answer.(10)

KAP Surveys: Appropriateness and Challenges

KAP surveys are focused evaluations that measure changes in human knowledge, attitudes and practices in response to a specific intervention. The KAP survey was first used in the fields of family planning and population studies in the 1950s. KAP studies use fewer resources and tend to be more cost-effective than other social research methods because they are highly focused and limited in scope. KAP studies tell us what people know about certain things, how they feel, and how they behave; each study is designed for a specific setting and issue.(11) "The attractiveness of KAP surveys is attributable to characteristics such as an easy design, quantifiable data… concise presentation of results, generalisability of small sample results to a wider population, cross-cultural comparability, speed of implementation, and the ease with which one can train enumerators." (12) In addition, KAP studies bring to light the social, cultural and economic factors that may influence health and the implementation of public health initiatives. "There is increasing recognition within the international aid community that improving the health of poor people across the world depends upon adequate understanding of the socio-cultural and economic aspects of the context in which public health programmes are implemented. Such information has typically been gathered through various types of cross-sectional surveys, the most popular and widely used being the knowledge, attitude, and practice (KAP) survey." (13)

KAP Research Protocols

The basic elements of a KAP survey include: (14)


- Domain identification: the domain is the subject of the KAP study, including the knowledge, attitudes and practices of a community in regard to that subject.
- Identification of the target audience.
- Determination of sampling methods: KAP studies usually use a survey or questionnaire administered through interviews.
- Analysis and reporting: KAP surveys strive to collect the least information needed to determine whether the knowledge, attitudes and practices of a community have changed from one time period to another. For large sample sizes, software such as SPSS or Excel is recommended for organizing and analyzing the data. Findings are usually presented using descriptive statistics, such as a table or histogram for each section (knowledge, attitudes and practices).(15)
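The descriptive-statistics step above can be sketched without SPSS or Excel. The example below summarises hypothetical knowledge scores and practice frequencies for two survey rounds; the scores, categories, and round names are illustrative assumptions, not data from any actual KAP study.

```python
import statistics
from collections import Counter

# Hypothetical KAP data for two survey rounds: each round has a list of
# knowledge scores (out of 10) and a practice category per respondent.
baseline = {"knowledge": [3, 5, 4, 2, 6],
            "practice": ["none", "net", "none", "net", "net"]}
followup = {"knowledge": [6, 7, 5, 6, 8],
            "practice": ["net", "net", "none", "net", "net"]}

def summarise(round_name, data):
    """Print the mean and standard deviation of knowledge scores and a
    frequency table of practice categories for one survey round."""
    k = data["knowledge"]
    print(f"{round_name}: knowledge mean={statistics.mean(k):.1f}, "
          f"sd={statistics.stdev(k):.1f}")
    print(f"{round_name}: practice frequencies={dict(Counter(data['practice']))}")

summarise("baseline", baseline)
summarise("follow-up", followup)
```

Comparing the baseline and follow-up summaries side by side is exactly the "has anything changed between the two time periods" question the protocol describes; a formal significance test would be a natural next step but is outside this sketch.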

 



KAP Surveys and Public Health

KAP survey data are essential for informing public health work. With regard to tuberculosis, for example, a KAP survey can gather information about what respondents know about TB, what they think about people who have TB, the health system's response to TB, and what care someone with TB should seek. KAP surveys are very helpful for identifying knowledge gaps, cultural beliefs, or behavioral patterns that may facilitate or create barriers to TB control or other public health efforts. In addition, the data collected from KAP surveys enable programme managers to set TB priorities, establish baseline levels, and measure change resulting from interventions. KAP studies are also useful for studying other diseases, such as malaria. For example, a KAP study in Swaziland was used to provide baseline data about malaria knowledge, attitudes and practices at the community level prior to implementing a malaria elimination strategy. The results showed that most participants exhibited a reasonable knowledge of malaria; however, the study found a need to improve the availability of information about malaria through community channels.(16) A KAP survey can be conducted at any point during a public health intervention, but it is most useful when conducted in the early phases of a project and again after the intervention is completed.(17) KAP surveys have also been used to assess and improve reproductive health in developing countries. For example, a KAP survey was implemented in Kabul, Afghanistan, to contribute to a better understanding of how Afghan women perceive their reproductive health and their reproductive health needs. This is important since a socially integrated and culturally accepted approach is essential for public health initiatives involving reproductive health. The survey found that knowledge of family planning methods and STIs was very limited. 
The study also found that most women preferred institutional delivery and assistance by qualified health staff at birth, though the number of women who receive this care is very low.(18) Additionally, KAP surveys are relevant to public health awareness campaigns. “Before beginning the process of creating awareness in any given community, it is first necessary to assess the environment in which awareness creation will take place… Understanding the levels of Knowledge, Attitude and Practice will allow for a more efficient process of awareness creation as it will allow the program to be tailored more appropriately to the needs of the community.” (19) KAP surveys are also “useful tools for identifying the technological interventions which are important in an area and which are likely to create a significant impact. By analyzing the words farmers use to communicate their knowledge, attitudes, and practices in regard to specific elements of a farming system, it is possible to identify those elements which may be good, those which may need to be improved, or those which may need to be discouraged.” With this information, interventions can be more effectively designed. (20)

The Shortcomings of KAP Surveys

Data Can Be Hard to Interpret Accurately

One of the main shortcomings of KAP surveys is that it is difficult to ensure an accurate interpretation of the data, so researchers should be cautious when interpreting results. The reliability of the data is frequently affected by underlying contextual and cultural factors. For example, a study of Yao women in Malawi asked women to agree or disagree with a variety of statements and found that a very high proportion of the women gave "agree" answers. "One explanation could be that there is indeed strong agreement and cultural homogeneity among the Yao women. However, when taking into account the Yao women's socio-cultural background, which often includes little formal (Western-style) schooling and which emphasizes the value of being non-confrontational, it is also possible that the question formulation can influence attitudes towards favourable, “agreeing” answers."(21) This example illustrates the importance of being aware of respondents' cultural backgrounds when interpreting their responses.

Lack of a Standardized Approach to Validate Findings

Most KAP studies rely on household surveys. It is important to consider that social norms and pressures may bias reporting, and that household surveys may systematically exclude portions of the population. For example, when AIDS-related KAP surveys were conducted in rural Africa, the household surveys did not accurately reflect casual sexual activity, as prostitutes were not captured in a representative manner. In light of this, the researchers of that study suggest "that there is an urgent need for a standardized approach to validating the findings from AIDS-related KAP surveys." (22) Moreover, in the same study, "data were found to be accurate at the aggregate level. However, accuracy of reporting at the individual level was found to be low. 
The gender difference in reporting of casual partners may be due to female underreporting, to not having captured prostitutes or to a different perception of the meaning of casual partnership." (23) It is therefore necessary for all KAP surveys to include a validity analysis, so as to ensure accuracy and to allow comparison of the quality of different KAP surveys. "There is an urgent need for a standardized approach to validating the findings from AIDS-related KAP surveys."(24)

Analyst Biases in KAP Surveys

KAP surveys have not undergone extensive methodological scrutiny relative to the number of surveys conducted and their importance for social policy. Though KAP surveys are administered in many countries, the results are almost exclusively analyzed by Western researchers. "This fact suggests that technical proficiency and, therefore, the quality of data generated clearly are considered more important to the process of analysis than familiarity with the culture of data origin. That given survey data may derive from cultures and languages different from the analyst's own has been ignored as a potential methodological problem."(25) This is problematic since interpretations of KAP survey data likely vary with the analyst's degree of familiarity with the cultures and practices of the place of data origin. A study conducted in Bangladesh focused on this issue and analyzed differences in responses to and interpretations of KAP surveys depending on the analyst's exposure to Bangladeshi culture. The researchers found that "the Bengali analyses tended to be more directly relevant to program and policy development. The Bengali groups, in contrast to the Western groups, gave interpretations of the observed empirical relationships that dug beneath the superficial and external features of the problem (e.g. 
rates and facilities) to lay bare the basic causes of the problem." (26) This study thus illustrates the importance of having analysts who are familiar with the culture and language of the country where the KAP surveys take place. "The findings of this study demonstrate that indigenous analysts tend to provide analyses that not only encompass the more context-free analyses provided by foreign analysts but also contain information more directly related to the culture, which provide a flavor of the context." (27)

Other Criticisms

A main criticism of KAP surveys is that their findings generally lead to prescriptions for mass behavior modification instead of targeting interventions towards individuals. For example, a study which used KAP surveys to study the AIDS epidemic found that "these unfocused inquiries into diffuse behaviors in undifferentiated populations are not productive in low-seroprevalence populations, especially when the objective is to design interventions to avert further infection. The failure of KAP surveys to distinguish conceptually between the relevance of AIDS-related behavioral data for individuals and for populations makes them fundamentally flawed for such purposes." (28) Another major problem is that investigators use KAP surveys to explain health behavior under the assumption that there is a direct relationship between knowledge and action.(29) A study on malaria control in Vietnam found that though respondents had a surprisingly high level of knowledge and awareness regarding malaria, "the findings are of limited value because of the lack of detail about and corroboration of self-reported adherence to preventive actions and health-seeking behaviors. Anecdotal evidence suggests there are deficiencies in these important practices, but the study design did not permit us to explore these." (30) In addition, though KAP surveys provide descriptive data about practices and knowledge, they fail to explain why and when certain treatment practices are chosen; in other words, they fail to explain the logic behind treatment-seeking practices.(31)

The Alternative

KAP surveys can be useful when the research plan is to obtain general information about public health knowledge and sociological variables. 
However, "if the objective is to study health-seeking knowledge, attitudes and practices in context, there are suitable ethnographic methods available, including focus group discussions, in-depth interviews, participant observation, and various participatory methods." (32) The preference for qualitative methods is corroborated by a study on malaria control in Vietnam, which found that though the KAP "survey generated useful findings, an initial, qualitative investigation (e.g. observation and focus group discussions) to explore the large numbers of potential influences on behavior and exposure risk would have provided a more robust underpinning for the design of survey questions. This would have strengthened its validity and generated additional information." (33) A study by Agyepong and Manderson (1999) also confirms this notion and argues "that truly qualitative methods, such as observation, individual semi-structured interviews, or focus group discussions, are vital foundations for exploratory investigations at the community level, and should precede and underpin population-level approaches, such as KAP surveys."(34)

Conclusion

The survey is critical to designing public health interventions and assessing their impact. A variety of methodologies can be used when designing surveys: open-ended questions, partially categorized questions, and closed-ended questions. Each type of question has its own benefits and drawbacks, though partially categorized questions are considered to yield the most accurate and reliable data. KAP surveys explore respondents' knowledge, attitudes and practices regarding a particular topic. They are typically used to document community characteristics, knowledge, attitudes and practices that may help explain health risks and behaviors. 
Though they are very useful for obtaining general information about sociological and cultural variables, they are of limited validity if not grounded in an initial qualitative research study or survey.(35)

Footnotes

(1) “Questionnaire Design.” Accessed on 10 December 2010. <http://people-press.org/methodology/questionnaire/> (2) “Module 10B: Design of Research Instruments; Interview Guides and Interview Skills.” Accessed on 10 December 2010. <http://www.idrc.ca/en/ev-56614-201-1-DO_TOPIC.html> (3) Jackson, Sherri. Research Methods: A Modular Approach. (Belmont, California: Thomson Wadsworth, 2008) Accessed on 10 December 2010. <http://books.google.com/books?id=j09b2rTVRsAC&pg=PA91&lpg=PA91&dq=survey+question+methodologies+open +ended&source=bl&ots=noOj9Id0wk&sig=9DR3lHFvk7QcLatWLo8mOK-na6Y&hl=en&ei=QoYCTdWxNYP8Aao1JDnAg&sa=X&oi=book_result&ct=result&resnum=7&ved=0CEYQ6AEwBjgK#v=onepage&q&f=false> (4) “Module 10B: Design of Research Instruments; Interview Guides and Interview Skills.” Accessed on 10 December 2010. <http://www.idrc.ca/en/ev-56614-201-1-DO_TOPIC.html> (5) “Crafting Your Survey Questions: Open-Ended Versus Closed-Ended.” <http://www.articlesbase.com/businessarticles/crafting-your-survey-questions-openended-versus-closedended-1448771.html> (6) “Module 10B: Design of Research Instruments; Interview Guides and Interview Skills.” Accessed on 10 December 2010. <http://www.idrc.ca/en/ev-56614-201-1-DO_TOPIC.html> (7) Ibid. (8) Jackson, Sherri. Research Methods: A Modular Approach. (Belmont, California: Thomson Wadsworth, 2008) Accessed on 10 December 2010. <http://books.google.com/books?id=j09b2rTVRsAC&pg=PA91&lpg=PA91&dq=survey+question+methodologies+open +ended&source=bl&ots=noOj9Id0wk&sig=9DR3lHFvk7QcLatWLo8mOK-na6Y&hl=en&ei=QoYCTdWxNYP8Aao1JDnAg&sa=X&oi=book_result&ct=result&resnum=7&ved=0CEYQ6AEwBjgK#v=onepage&q&f=false> (9) “Questionnaire Design.” Accessed on 10 December 2010. <http://people-press.org/methodology/questionnaire/> (10) Ibid. (11) “Knowledge, Attitudes and Practices (KAP) Studies for Water Resources Projects.” Accessed on 10 December 2010. <http://files.dnr.state.mn.us/assistance/grants/community/6kap_summary.pdf> (12) Launiala, A. 
“How much can a KAP survey tell us about people’s knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi.” Anthropology Matters. 11.1 (2009). Accessed on 14 December 2010. <http://www.anthropologymatters.com/index.php?journal=anth_matters&page=article&op=viewArticle&path[]=31 &path[]=53> (13) Ibid. (14) “Knowledge, Attitudes and Practices (KAP) Studies for Water Resources Projects.” Accessed on 10 December 2010. <http://files.dnr.state.mn.us/assistance/grants/community/6kap_summary.pdf> (15) Ibid. (16) Hlongwana, K., et. al. “Community knowledge, attitudes and practices (KAP) on malaria in Swaziland: a country earmarked for malaria elimination.” Malar J. 8.29 (2009). Accessed on 14 December 2010. <http://www.malariajournal.com/content/8/1/29>

(17) “Advocacy, communication and social mobilization for TB control. A guide to developing knowledge, attitude and practice surveys.” Accessed on 10 December 2010. <http://www.tbtoolkit.org/assets/0/184/286/074e7ce8-e0dd4579-a26f-29ac680154ca.pdf> (18) “KAP Survey regarding reproductive health”. Accessed on 13 December 2010. <http://www.icrh.org/files/KAPsurveyKabulICRHIbnSina.pdf> (19) “KAP Study Protocol.” Accessed on 13 December 2010. <http://laico.org/v2020resource/files/KAPStudyMethodology.pdf> (20) “Participatory survey methods for gathering information.” Accessed on 13 December 2010. <http://www.fao.org/docrep/W8016E/w8016e01.htm> (21) Launiala, A. “How much can a KAP survey tell us about people’s knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi.” Anthropology Matters. 11.1 (2009). Accessed on 14 December 2010. <http://www.anthropologymatters.com/index.php?journal=anth_matters&page=article&op=viewArticle&path[]=31 &path[]=53> (22) Schopper, D., Doussantousse, S., Orav, J. “Sexual Behaviors Relevant to HIV Transmission in a Rural African Population. How much can a KAP survey tell us?” Soc. Sci. Med. 37.3 (1993):401-412. Accessed on 14 December 2010. <http://www.sciencedirect.com/science?_ob=MImg&_imagekey=B6VBF-4669683-SW8&_cdi=5925&_user=483702&_pii=027795369390270E&_origin=search&_zone=rslt_list_item&_coverDate=08%2F31 %2F1993&_sk=999629996&wchp=dGLzVzb-zSkzS&md5=52a0d46738ea46dcd2bdd67ee7e8bc7d&ie=/sdarticle.pdf> (23) Ibid. (24) Ibid. (25) Ratcliffe, J. “Analyst Biases in KAP Surveys: A Cross-Cultural Comparison.” Studies in Family Planning. Accessed on 13 December 2010. <http://www.jstor.org/stable/1965827> (26) Ibid. (27) Ibid. (28) Smith, H. “On the limited utility of KAP-style survey data in the practical epidemiology of AIDS, with reference to the AIDS epidemic in Chile.” Health Transit Rev. 3.1 (1993):1-16. Accessed on 13 December 2010. 
<http://www.ncbi.nlm.nih.gov/pubmed/10148795> (29) Launiala, A. “How much can a KAP survey tell us about people’s knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi.” Anthropology Matters. 11.1 (2009). Accessed on 14 December 2010. <http://www.anthropologymatters.com/index.php?journal=anth_matters&page=article&op=viewArticle&path[]=31 &path[]=53> (30) Quy Anh, N. “KAP Surveys and Malaria Control in Vietnam: Findings and Cautions about Community Research.” Southeast Asian J Trop Med Public Health. 36.3 (2005):572-577. Accessed on 13 December 2010. <http://www.tm.mahidol.ac.th/seameo/2005_36_3/05-3442.pdf>

(31) Launiala, A. “How much can a KAP survey tell us about people’s knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi.” Anthropology Matters. 11.1 (2009). Accessed on 14 December 2010. <http://www.anthropologymatters.com/index.php?journal=anth_matters&page=article&op=viewArticle&path[]=31 &path[]=53> (32) Ibid. (33) Quy Anh, N. “KAP Surveys and Malaria Control in Vietnam: Findings and Cautions about Community Research.” Southeast Asian J Trop Med Public Health. 36.3 (2005):572-577. Accessed on 13 December 2010. <http://www.tm.mahidol.ac.th/seameo/2005_36_3/05-3442.pdf> (34) Ibid. (35) Ibid.

How much can a KAP survey tell us about people's knowledge, attitudes and practices? Some observations from medical anthropology research on malaria in pregnancy in Malawi By Annika Launiala (University of Tampere and University of Kuopio, Finland) Anthropology Matters, Vol 11, No 1 (2009)

http://www.anthropologymatters.com/index.php?journal=anth_matters&page=article&op=viewArticle&path%5B%5D=31&path%5B%5D=53

Knowledge, attitude, and practice (KAP) surveys are widely used to gather information for planning public health programmes in countries in the South. However, there is rarely any discussion about the usefulness of KAP surveys in providing appropriate data for project planning, and about the various challenges of conducting surveys in different settings. The aim of this article is two-fold: to discuss the appropriateness of KAP surveys in understanding and exploring health-related knowledge, attitudes, and practices, and to describe some of the major challenges encountered in planning and conducting a KAP survey in a specific setting. Practical examples are drawn from a medical anthropology study on socio-cultural factors affecting treatment and prevention of malaria in pregnancy in rural Malawi, southern Africa. The article presents issues that need to be critically assessed and taken into account when planning a KAP survey.

Background: KAP surveys

There is increasing recognition within the international aid community that improving the health of poor people across the world depends upon adequate understanding of the socio-cultural and economic aspects of the context in which public health programmes are implemented. Such information has typically been gathered through various types of cross-sectional surveys, the most popular and widely used being the knowledge, attitude, and practice (KAP) survey, also called the knowledge, attitude, behaviour and practice (KABP) survey (Green 2001, Hausmann-Muela et al. 2003, Manderson and Aaby 1992, Nichter 2008:6-7). The KAP survey tradition was first born in the field of family planning and population studies in the 1950s. 
KAP surveys were designed to measure the extent to which an obvious hostility to the idea and organisation of family planning existed among different populations, and to provide information on the knowledge, attitudes, and practices in family planning that could be used for programme purposes around the world (Cleland 1973, Ratcliffe 1976). In the 1960s and 1970s, KAP surveys began to be utilised for understanding family planning perspectives in Africa (Schopper et al. 1993). Around the same time, the number of studies on community perspectives and human behaviour grew rapidly in response to the needs of the primary health care approach adopted by international aid organisations. Hence KAP surveys established their place among the methodologies used to investigate health behaviour, and today they continue to be widely used to gain information on health-seeking practices (Hausmann-Muela et al. 2003, Manderson and Aaby 1992). The attractiveness of KAP surveys is attributable to characteristics such as an easy design, quantifiable data, ease of interpretation and concise presentation of results, generalisability of small sample results to a wider population, cross-cultural comparability, speed of implementation, and the ease with which one can train enumerators (Bhattacharyya 1997, Stone and Campbell 1984). Nevertheless, over the years some researchers have criticised KAP surveys for taking for granted that the data provided offers accurate information about knowledge, attitudes, and practices that can be used for programme planning purposes (Cleland 1973, Nichter 1993, Pelto and Pelto 1997, Yoder 1997, see also Green 2001). A number of social scientists have also voiced their concern over the applicability of KAP surveys (Cleland 1973, Caldwell et al. 1994, Green 2001, Manderson and Aaby 1992, Nichter 1993, Ratcliffe 1976, Smith 1993). 
Yet in the international health community and among health programme planners, there is rarely any discussion about whether or not KAP surveys are an appropriate methodology for exploring health-seeking practices for programme planning purposes (Foster 1987). Lately there has not been much critical discussion among social scientists regarding this issue either. My experience with the use of KAP surveys for programme planning comes from Malawi, where I worked as a project officer for UNICEF from 1998 to 2001. During this time I was involved in several KAP surveys conducted by UNICEF in collaboration with their local partners. At that time KAP survey research was common practice in international health (see also Nichter 2008:6-7).
Why KAP surveys? In the UNICEF Malawi office, there were several reasons. First of all, there were Malawians who had received training in survey and quantitative research (compared to only two medical anthropologists in the entire country, to my knowledge). Secondly, surveys were easy to conduct, rather cost-effectively, even nationwide. Thirdly, there was an assumption that the results could be generalised nationwide; and, moreover, the results, "hard numbers", could be used to show progress to the funding agencies. During my three years in UNICEF I became rather doubtful about the usefulness of KAP survey data in programme planning because we rarely discussed the data quality and thus the usefulness of the results (see also Gill 1993). As a matter of fact, the findings were used only to a limited extent for programme purposes. This problem was recognised by many of us, both national and international staff working in UNICEF as well as Malawian counterparts, but due to time constraints and a lack of skills and mechanisms for translating the results into practice, research reports were frequently underutilised. When I started to develop a PhD research plan in 2002 for a medical anthropology study on malaria in pregnancy, I was interested in adding a KAP survey to the study design to gain first-hand practical experience with the method and to clarify my doubts and concerns about it. Thus, in addition to in-depth interviews, focus group discussions and participant observation, I carried out a KAP survey with the assistance of four local research assistants at the antenatal clinic of the Lungwena Health Centre and in six villages of the health centre catchment area. In the villages, the aim was 200 interviews altogether, with samples proportional to the size of each village. At the antenatal clinic, every third woman was selected over four days.
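The two-part design just described, village samples allocated in proportion to village size plus systematic selection of every third clinic attendee, can be sketched as follows. The village names and population figures are invented for illustration; only the overall design follows the text.

```python
# Sketch of the two-part sampling design: proportional allocation across
# six villages, plus every-third systematic selection at the clinic.
# Village sizes below are hypothetical, not the study's actual figures.

def proportional_allocation(village_sizes, total_interviews):
    """Allocate interviews to villages in proportion to their size."""
    total_pop = sum(village_sizes.values())
    quotas = {v: total_interviews * n / total_pop for v, n in village_sizes.items()}
    alloc = {v: int(q) for v, q in quotas.items()}
    # Largest-remainder step so the allocations sum exactly to the target.
    remainder = total_interviews - sum(alloc.values())
    for v in sorted(quotas, key=lambda k: quotas[k] - int(quotas[k]), reverse=True)[:remainder]:
        alloc[v] += 1
    return alloc

def systematic_sample(attendees, step=3):
    """Select every `step`-th person from an ordered list of attendees."""
    return attendees[step - 1::step]

villages = {"A": 600, "B": 450, "C": 300, "D": 250, "E": 230, "F": 170}  # hypothetical
allocation = proportional_allocation(villages, 200)
clinic = systematic_sample(list(range(1, 145)))  # every third of 144 attendees
print(allocation)        # allocations sum to 200
print(len(clinic))       # 48 clinic interviews, 248 in total
```

With these hypothetical figures the village allocations sum to exactly 200 and the clinic selection yields 48 respondents, matching the totals reported below.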
This sampling procedure resulted in 248 interviews, 200 in the villages and 48 at the Health Centre (see Launiala and Kulmala 2006 for more details). The aim of this article is two-fold: to discuss the appropriateness of KAP surveys in understanding health-related knowledge, attitudes, and practices, and to describe some of the major challenges encountered in planning and conducting a KAP survey in this setting. This article is therefore divided into two main sections. The first section looks more closely at the main aspects of each element in a KAP survey, and the second concentrates on challenges encountered in the field. Throughout I will draw examples from the KAP survey I conducted as part of my PhD research on socio-cultural factors affecting treatment and prevention of malaria in pregnancy among the Yao in rural Malawi. Main aspects of a KAP survey Whose knowledge counts? In KAP surveys, the knowledge part is normally used only to assess the extent of community knowledge about public health concepts related to national and international public health programmes. Investigation of other types of knowledge, such as culture-specific knowledge of illness notions and explanatory models, or knowledge related to health systems, e.g. access, referral, and quality, is largely neglected (Hausmann-Muela et al. 2003). The lack of investigation of illness notions and explanatory models is probably due to the fact that community knowledge is embodied knowledge of explanatory illness models and treatment practices. It is contextualised, practice-based, and emergent in times of illness, and therefore very difficult to detect using KAP surveys, as pointed out by Nichter (1993). The narrow focus on knowledge can further be explained by the definition of knowledge and the question of whose knowledge counts. Pelto and Pelto (1997) have pointed out that public health professionals usually share the view that knowledge and beliefs are contrasting terms.
They have an implicit assumption that knowledge is based on scientific facts and universal truths (it refers to "knowing" biomedical information). In contrast, beliefs refer to traditional ideas, which are erroneous from the biomedical perspective, and which form obstacles to appropriate behaviour and treatment-seeking practices (see also Good 1994). This narrow definition of knowledge is also shared by international health communities. While they have recognised the role and engagement of communities in the management and prevention of diseases such as malaria and acute respiratory infections (ARI), they still fail to recognise the value of the knowledge that the communities possess (Nichter 1993). There is, however, no specific reason why knowledge related to health systems is rarely investigated in KAP surveys.

In anthropology, knowledge and beliefs are not contrasting terms (Pelto and Pelto 1997). In my study, I considered Yao women's knowledge to include local indigenous knowledge and beliefs as well as biomedical knowledge. For example, during the qualitative phase of my study I investigated the meaning of malungo (a local word used for malaria), and the results revealed that malungo is an ambiguous term with multiple meanings and definitions, used interchangeably to refer to many types of feverish illnesses, not just malaria. More than 10 different types of malungo were noted, but I observed that these local categories were vague, ambiguous and not shared by all members of the community. Instead, categories were produced and reproduced in encounters with other community members (Launiala and Kulmala 2006). In the KAP survey I also tried to go beyond public health and biomedical knowledge by investigating types of malungo. I asked: "Are there different kinds of malungo?", and 75% of respondents (n=248) said no, meaning that there is only one type of malungo. Of the remaining 25%, who said yes and were then asked to name the different types in an open-ended question, the majority said there were two types: normal malungo and/or severe malungo. This result showed the difficulty (and pointlessness) of asking questions about local notions of illness in the format of a KAP survey. Attitudes - can they be measured? Measuring attitudes is the second part of a standard KAP survey questionnaire. However, many KAP studies do not present results regarding attitudes, probably because of the substantial risk of falsely generalising the opinions and attitudes of a particular group (Cleland 1973, Hausmann-Muela et al. 2003). In everyday English, the term attitude is usually used to refer to a person's general feelings about an issue, object, or person (Petty and Cacioppo 1981).
Furthermore, attitudes are interlinked with the person's knowledge, beliefs, emotions, and values, and they can be either positive or negative. Pelto and Pelto (1994) have also described causal or erroneous attitudes, which are considered derivatives of beliefs and/or knowledge. The act of measuring attitudes via a survey has been criticised for many reasons. When confronted with a survey question, people tend to give answers which they believe to be correct or generally acceptable and appreciated. Sensitive topics are particularly demanding. The survey interview context may influence the answer: whether the interview is conducted at a clinic or in a village, whether other people are present, and so on. The question formulation can steer respondents towards a favourable answer. Sometimes the respondents may be uninformed about the issue and thus find it strange, but their attitudes are nonetheless measured. On occasion, the attitude scales (numerical/verbal) may fail to reflect the respondents' answers (Cleland 1973, Hausmann-Muela et al. 2003, Pelto and Pelto 1994). I also included a section on attitudes in the KAP questionnaire that I used, following the typical statement formulation with three response alternatives ("agree", "not sure", "disagree"). I formulated the statements based on some key findings from the qualitative phase of my study, with the purpose of obtaining a clearer picture of whether these findings were shared by a larger proportion of Yao women, or were merely the views of isolated individuals. The questionnaire contained 11 statements altogether. In addition to the three response alternatives, I added an open-ended question to "disagree" responses in order to gain some understanding of why respondents disagreed. Moreover, I instructed the assistants to mark down when a respondent said that she did not know the answer.
Analysis of the results raises some concerns about the possibility of measuring attitudes through a questionnaire. The high proportion of "agree" answers was striking. For nine of the 11 statements, less than 10% (24/248) disagreed, between 63% (157/248) and 99% (246/248) agreed, and between 2% (5/248) and 29% (72/248) were not sure. There were slightly more "agree" answers among the women interviewed at the antenatal clinic than among the village respondents. Only one statement drew more disagreeing than agreeing answers. This statement concerned the role of the maternal uncle: "You ask advice from your maternal uncle when you are severely sick." Only 39% (97/248) agreed with it.
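As a minimal sketch of how such three-alternative responses can be tabulated and screened for a suspiciously high share of "agree" answers (acquiescence), the following uses invented responses rather than the actual study data; the 0.8 threshold is likewise an arbitrary illustration, not a standard cut-off.

```python
from collections import Counter

# Tabulate one attitude statement's three-alternative responses and flag
# possible acquiescence bias. The responses below are invented for
# illustration only; they are not the study's data.

def tabulate(responses):
    """Return the share of each answer category for one statement."""
    counts = Counter(responses)
    n = len(responses)
    return {cat: counts.get(cat, 0) / n for cat in ("agree", "not sure", "disagree")}

def flag_acquiescence(shares, threshold=0.8):
    """Flag a statement whose 'agree' share exceeds the threshold."""
    return shares["agree"] > threshold

# Invented distribution for a single statement (n=248).
statement = ["agree"] * 220 + ["not sure"] * 20 + ["disagree"] * 8
shares = tabulate(statement)
print(shares)
print(flag_acquiescence(shares))  # True: wording worth re-examining
```

A flagged statement is not proof of bias, but it marks items whose wording, like the "nurses are educated" statement discussed below, deserves a second look.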

There may be several explanations for the high proportion of agreeing answers. One explanation could be that there is indeed strong agreement and cultural homogeneity among the Yao women. However, when taking into account the Yao women's socio-cultural background, which often includes little formal (Western-style) schooling and which emphasises the value of being non-confrontational, it is also possible that the question formulation steered respondents towards favourable, "agreeing" answers. For example, 99% (246/248) agreed with the statement: "You can trust the advice and medication given by the nurses, because nurses are educated." This statement was formulated on the basis of the qualitative findings, but the problem is that it would require a lot of courage from the women to disagree with it even if they thought otherwise. I concur with other studies (Cleland 1973, Hausmann-Muela et al. 2003) that researchers should be very cautious in interpreting results related to attitude measurement. It is important to take into consideration the underlying contextual factors that affect the reliability of the data. One way to improve the reliability of measuring attitudes is to transform some of the attitude statements into direct questions in the other sections and to check for discrepancies between the results. What does a KAP survey tell us about practices? A third and integral part of KAP surveys is the investigation of health-related practices. Questions normally concern the use of different treatment and prevention options and are hypothetical. KAP surveys have been criticised for providing only descriptive data which fails to explain why and when certain treatment and prevention practices are chosen. In other words, the surveys fail to explain the logic behind people's behaviour (Hausmann-Muela et al. 2003, Nichter 1993, Pelto and Pelto 1994, Yoder 1997).
Another concern is that KAP survey data is often used to plan activities aimed at changing behaviour, based on the false assumption that there is a direct relationship between knowledge and behaviour. Several studies have, however, shown that knowledge is only one factor influencing treatment-seeking practices, and that in order to change behaviour, health programmes need to address multiple factors, ranging from socio-cultural to environmental, economic, and structural factors (Balshem 1993, Farmer 1997, Launiala and Honkasalo 2007). I was aware of the limitation of KAP surveys when it came to explaining the logic behind treatment-seeking practices and the difficulty of formulating a structured question to elicit these practices, and thus I included very few questions about this subject in my KAP questionnaire. I had one question about the time elapsed between onset of symptoms and treatment, thinking that this was a relatively straightforward question. According to the results, 26% (63/240) received treatment within one to three hours, 25% (60/240) within 24 hours, and 5% (13/240) within two days, but 41% (99/240) fell into the category "other". Those who said "other" were asked to specify their answer. The most typical explanation was either "immediately upon attack", or that they "took pills" (ranging from panado, aspirin, or fansidar to penicillin) when symptoms appeared. It seems that the respondents interpreted the meaning of the question differently. The time categories seemed not to make much sense to them. Some respondents wanted to emphasise that they take pills immediately when symptoms appear. The problem was, however, that these answers did not explain which pills were taken for which symptoms and why. The answer "taking medication immediately" is also questionable in light of findings from the qualitative data and from other social scientific studies explaining treatment-seeking practices (for example Agyepong and Manderson 1994, Hausmann-Muela et al. 
1998, Nyamongo 2002). The choice of treatment depends on the severity of the symptoms. It is common for people to wait and see how the symptoms develop before deciding on a treatment. Mild fever is commonly treated at home with panado and aspirin; if the fever persists, one may visit a health centre, and if it develops into a high fever causing convulsions, relatives may seek treatment from a traditional healer. Over the past decades there has also been discussion concerning informant accuracy in reporting past events and how accurately such reports reflect reality. According to Bernard et al. (1984:503), "on average, about half of what informants report is probably incorrect in some way", causing major concern regarding the validity of the data. All this
suggests that analysis of survey research should pay more attention to the interpretation and elaboration of results. Understanding and taking account of the research context should also be a prerequisite for survey research as it is for any kind of qualitative research. Challenges encountered in the field and some explanations for them Despite the effort to avoid the weaknesses of survey research through careful planning, I encountered several practical challenges that require discussion (see Cleland 1973, Ratcliffe 1976, Stone and Campbell 1984; see also O'Barr et al. 1973 and Ross and Vaughan 1986). The first challenge was the translation of the questionnaire: each of my research assistants translated part of the questionnaire from English into Chiyao. Then I exchanged the translations among the assistants and asked them to translate the questionnaire back into English. This exercise showed how difficult translation is, because meanings change. For example, the Yao use the word malungo to refer to malaria, but the meaning of malungo is complex as it can be used to refer to any feverish disease, its meaning varying from body pains to fever and malaria (Launiala and Kulmala 2006). So, sometimes the research assistants translated malungo as fever, sometimes as body pains, and sometimes as malaria, complicating the formulation of the questions and the interpretation of the results. Another experience of how the meanings of the questions can change occurred after I had returned home from the field. During data analysis I needed to double-check the translation of two questions regarding fever in pregnancy. I sent the questions (in Chiyao) back to Malawi and asked my research assistants to translate them back into English. Surprisingly (or perhaps predictably), the meaning of both questions changed from the original, making it questionable to use the results based on these questions, which also caused doubts about the validity of the other results. 
I had already come across this problem of changing meanings during the in-depth interviews that I conducted (with a research assistant who acted as an interpreter), but due to the nature of the interview method, I was better able to confirm the concepts and meanings used during the interviews. In surveys, controlling the cultural reinterpretation of questions is more difficult, because of the lack of in-depth conversation and because a number of different research assistants are often used to collect the data. There are several explanations for these linguistic challenges. Chiyao, the language spoken in the area in which I was conducting research, is an exclusively oral language, containing concepts and words with no direct equivalents in English, and vice versa. And among the Yao, knowledge and information exchange often occurs through various social networks (Soldan 2004). Furthermore, the Yao have a specific explanatory model for malaria that is embedded in local cultural understandings, which affects their perception, knowledge, conceptualisation, treatment-seeking practices, and so on (Helitzer-Allen and Kendall 1992, Launiala and Kulmala 2006), similarly to many other ethnic groups in sub-Saharan countries (Hausmann-Muela et al. 1998, Kengeya-Kayondo et al. 1994, Nyamongo 2002, Winch et al. 1996). Researchers with western scientific training often have little knowledge, understanding and sensitivity concerning the socio-cultural context in which they conduct their studies. This cultural gap between researchers with western scientific training and local respondents may not only cause misinterpretation and cultural reinterpretation of questions, but also imposes constraints on data analysis (Ratcliffe 1976, Stone and Campbell 1984).
According to Stone and Campbell (1984), many surveys conducted in rural areas in the South can be faulted for failing to meet even the fundamental requirement of formulating questions in meaningful local categories that make sense to the respondents. The authors found an even bigger problem in concepts which evoke special meanings and associations in respondents, who then give their answer based on the connotation of the question rather than its formal content. For example, in their study in Nepal, a great number of "don't know" answers were received, because many of the respondents had interpreted the question "Have you heard of abortion?" as asking about knowledge of abortion techniques or about knowledge of who had had an abortion. Thus, even when a questionnaire is designed using local concepts and the questions are formulated on the basis of culture-specific data, it is challenging to control misunderstandings, changing
meanings, and cultural reinterpretations. One way to address this challenge is to loosen the time schedule (if possible) and to spend time every day going through the survey responses together with the research assistants (thus building in continuous quality checks and training). Another problem I encountered was the difficulty of obtaining information concerning sensitive topics, e.g. the use of traditional healers and witchcraft. Although I had used cultural information to formulate the questions, this was inadequate to overcome the problem. For example, concerning causes of complications during pregnancy, 72% (178/248) of the respondents knew of no cause. Only 8% (20/248) mentioned witchcraft as a cause of complications, yet during the focus group discussions and in-depth interviews, women told several stories of miscarriage, complications, and even maternal death caused by witchcraft. There are several explanations for this. Firstly, many Malawians still feel uncomfortable expressing their negative feelings and opinions openly, and discussing sensitive issues such as traditional healers and witchcraft. According to some of my Malawian colleagues in UNICEF, this was in large part due to the oppressive era of Kamuzu Banda (1964-1994). Under Banda's rule, people lived in constant fear because there were spies everywhere, and people were known to disappear as a consequence of saying or doing the wrong things. During Banda's time, the use of traditional healers was also strictly prohibited and sanctioned. Nevertheless, most Malawians have a strong belief in witchcraft (ufiti) as an active force. Its presence in everyday life becomes explicit in the so-called ufiti discourse, which is used to maintain or preserve a mystical construct, to stop a certain direction of discourse, and to serve as the ultimate explanation (Englund 1996, Lwanda 2002).
My experience is that a questionnaire is a poor instrument for gathering information on sensitive issues, because it does not allow for building rapport between the interviewer and the respondent. I was also interested in reaching beyond the "yes" and "no" answers, and therefore included open-ended questions for additional explanations in my KAP questionnaire. The results were, however, rather disappointing; they contained little information and many questions were left unanswered. This weakness could be due to the fact that I unintentionally emphasised quantity over quality, as I did not put any limit on the number of survey interviews conducted per day. So the faster the research assistants managed to complete the surveys, the sooner they received their salary to support their families. On the other hand, there is always the pressure of time in the field, too. We were rushing through the survey in anticipation that the rainy season would start any day, making it difficult to reach the respondents in the villages. I too was busy conducting interviews and did not follow up the work of the research assistants every day. The skills and enthusiasm of research assistants also vary, and some are better at probing than others. All these factors may have led to many of the open-ended questions being left unanswered. I also encountered problems caused by the previous training of the research assistants by other research teams; unlearning previous ways of collecting data proved hard. There are a limited number of research assistants available in the study area, yet there have been many research teams over the years. This means that the same research assistants are employed in many studies, all with different aims and methods. The previous research teams conducting surveys had trained the assistants to probe the alternatives, but I wanted the assistants not to probe unnecessarily, and to make a note when alternatives were probed.
As a result, some assistants had difficulty learning to avoid unnecessary probing, and others forgot to indicate when they had probed the alternatives. As Cleland (1973) pointed out as early as the 1970s, probing or not probing makes a difference to the results. My advice to proceed cautiously with probing the alternative answers led to a high number of "don't know" responses. An alternative explanation, however, is that women's knowledge most often concentrates on local, indigenous issues, and they had little or nothing to say when presented with questions emphasising public health and biomedical knowledge. Or the women may have been worried about giving wrong answers, or may have been afraid of answering at all, especially if their relatives and other community members were present, as was often the case in the villages. Lastly, there was also the problem of courtesy bias, meaning that respondents produced answers which they believed the research assistants and health centre staff wanted to hear. Malawians are a polite people, and, disliking the
idea of conflict, they rarely refuse to participate in a survey. This may cause a problem of courtesy bias, as reported in many studies criticising the use of surveys (Bhattacharyya 1997, Stone and Campbell 1984). The courtesy bias could have been further worsened by the fact that most respondents assumed that this survey had something to do with the Lungwena health centre, which may have made them worry about what type of treatment they would receive if they were critical of the services and care provided by the health centre. For example, answers to the survey questions related to the use of the antenatal clinic's services seemed positive, yet during the in-depth interviews women voiced criticism of the antenatal clinic's services. The problem of courtesy bias is further compounded by the fact that local people are used to receiving money or goods in exchange for their knowledge. Interestingly, an unpublished report from a results dissemination meeting in the present study area shows that, given an opportunity, people are willing to express their concerns and even negative opinions about surveys (TUMCHP 2005). The report revealed that the community in question was tired of participating in the many ongoing research activities. Many of the community members did not even differentiate between the different studies, and lacked understanding of their aims. They also expected handouts from the researchers after taking part in surveys, especially after answering very long questionnaires. Furthermore, they felt that some researchers exploited them by asking intimate questions about sexual issues and, at the same time, about their private lives, as such questions are considered culturally inappropriate and against their moral code.
More critical discussion about the challenges encountered in the field is needed My experience of using a standard KAP survey questionnaire to collect data on knowledge, attitudes and practices concerning malaria in pregnancy in rural Malawi strengthens my opinion that the KAP survey, as a method, contains several weaknesses. Some of these weaknesses can be overcome through careful planning, pre-testing and training of research assistants. A bigger concern, however, is the appropriateness of a KAP survey for collecting data, particularly on attitudes and practices, and the way the results are interpreted without contextual understanding. Ratcliffe (1976) argued as early as the 1970s that uncritical reliance on a KAP survey's ability to produce accurate data, combined with limited comprehension of the socio-cultural context, is likely to deliver a narrow understanding of the underlying factors, or even worse, a bogus interpretation of the data. Another major problem is that many investigators use KAP surveys to explain health-seeking behaviour on the assumption that there is a direct relationship between knowledge and action, as pointed out by Hausmann-Muela et al. (2003). I agree with other authors that a KAP survey is a poor method for obtaining information about sensitive issues, such as traditional treatment and prevention practices, and sexual behaviour (Schopper et al. 1993, Smith 1993). At most, it can be used to assess people's knowledge about practices in general, but not their actual day-to-day practices and the explanations behind them (Hausmann-Muela et al. 2003, Nichter 1993). I also agree with Ratcliffe (1976) and Stone and Campbell (1984), who have argued that any kind of survey questionnaire is a rather unnatural method for collecting information in a rural setting in a non-Western culture.
The name "Knowledge, Attitudes, and Practice" (KAP) survey itself gives the misleading impression that a KAP survey can easily be used to collect data on health-seeking practices and that these data will be usable for programme planning. Professionals working in international health often do not have thorough methodological research training, and thus may take the use of certain methods for granted. What is lacking in today's scientific discussion is an analytical discussion of the strengths and weaknesses of survey design and methodology, and of the limitations of survey research in general. Often the data collection process is described superficially, following the standard procedures and leaving out the contextual description. Yet an open and transparent discussion is the only way to improve methods and to learn from mistakes. Presumably, despite the weaknesses of survey questionnaires, they will still be used for data collection in settings across the world. Therefore, I would argue that in addition to open discussion of the limitations of the method, minimum prerequisites are to carefully consider what type of information can be collected with a
questionnaire and to take into account the socio-cultural context both at the planning stage and when interpreting the results. Within public health programme research there has been an increasing trend towards multiple-method designs composed of a variety of qualitative and quantitative methods, in order to lessen the limitations of single-method designs (Bhattacharyya 1997, Stone and Campbell 1984, see also Lambert and McKevitt 2002). The use of a multiple-method design allows contextualisation of knowledge and makes it possible to understand the logic behind treatment-seeking practices. One of the advantages of combining qualitative and quantitative methods is that, if the study is appropriately designed, it increases the validity of the data. One should, however, keep in mind that any successful research outcome depends heavily on the skills of the researcher. Conclusion: the challenges and value of interdisciplinarity Today's health problems around the world cannot be solved by any discipline alone. The way forward is to enhance interdisciplinary cooperation between the social sciences and medicine. There are, however, challenges that need to be overcome before true interdisciplinarity can be achieved. One such challenge for anthropologists is to find a way to communicate with medical professionals and to argue convincingly for what anthropology can offer (Pool and Geissler 2007). When working with UNICEF in Malawi I often encountered resistance to my "anthropological" ideas and suggestions, presumably because most of my colleagues failed to understand the relevance of the study, and because I was unable to explain my ideas convincingly using the appropriate "public health language". Napolitano and Jones (2006) have described similar problems and experiences among public health practitioners in the UK and in the Gambia, referring to their limited understanding of anthropology and its contributions, and the existence of ethnocentric fears.
Some anthropologists might wonder why we should make an effort if medical researchers are perceived as not taking any steps towards understanding anthropology. I guess it all depends on what drives us to do research. My motivation for trying to enhance collaboration between medicine and medical anthropology is based on the hope that in the end it will improve the well-being of rural Malawians. Conducting a KAP survey in a rural African setting - and in other types of settings as well - is problematic for a number of reasons. These problems and challenges should be openly discussed in scientific publications and communicated to programme planners. A KAP survey can be useful when the research plan is to obtain general information about public health knowledge regarding treatment and prevention practices, or about sociological variables, such as income, education, occupation, and social status. It is important, however, to know and understand what type of data can be generated by which method, and to choose appropriate methods in relation to the study objectives. If the objective is to study health-seeking knowledge, attitudes, and practices in context, there are suitable ethnographic methods available, including focus group discussions, in-depth interviews, participant observation, and various participatory methods. A combination of qualitative and quantitative methods may also prove effective, but I believe that the best value can be achieved only when the research team consists of experts from both qualitative and quantitative research traditions. Anthropologists working in international and public health should strive to find ways to enhance true interdisciplinarity. References Agyepong, I. and L. Manderson. 1994. The diagnosis and management of fever at household level in the Greater Accra Region, Ghana. Acta Tropica 58, 317-330. Balshem, M. 1993. Cancer in the community: Class and medical authority. Washington, DC: Smithsonian Institution Press. Bernard, H. R., P. 
Killworth, D. Kronenfeld and L. Sailer. 1984. The problem of informant accuracy: The validity of retrospective data. Annual Review of Anthropology 13, 495-517.

Bhattacharyya, K. 1997. Key informants, pile sorts, or surveys? Comparing behavioral research methods for the study of acute respiratory infections in West Bengal. In The anthropology of infectious diseases: Theory and practice on medical anthropology and international health (eds) M. C. Inhorn and P. J. Brown, 211-238. Amsterdam: Routledge Publishers.

Caldwell, J. C., P. Caldwell, and P. Quiggen. 1994. The social context of AIDS in sub-Saharan Africa. New York: Population Council.

Cleland, J. 1973. A critique of KAP studies and some suggestions for their improvement. Studies in Family Planning 4(2), 42-47.

Englund, H. 1996. Witchcraft, modernity and the person: The morality of accumulation in Central Malawi. Critique of Anthropology 16(3), 257-279.

Farmer, P. E. 1997. Social scientists and new tuberculosis. Social Science and Medicine 44(3), 347-358.

Foster, G. M. 1987. World Health Organization behavioural science research: Problems and prospects. Social Science & Medicine 24(9), 709-717.

Gill, G. J. 1993. O.K., the data's lousy, but it's all we've got (being a critique of conventional methods). Gatekeeper Series no. 38. London: International Institute for Environment and Development (www.iied.org).

Good, B. 1994. Medicine, rationality and experience: An anthropological perspective. Cambridge: Cambridge University Press.

Green, C. E. 2001. Can qualitative research produce reliable quantitative findings? Field Methods 13(3), 3-19.

Hausmann-Muela, S., R. J. Muela and M. Tanner. 1998. Fake malaria and hidden parasites - the ambiguity of malaria. Anthropology and Medicine 5(1), 43-61.

Hausmann-Muela, S., R. J. Muela and I. Nyamongo. 2003. Health-seeking behaviour and the health system's response. DCPP Working Paper no. 14.

Helitzer-Allen, D. L. and C. Kendall. 1992. Explaining differences between qualitative and quantitative data: A study of chemoprophylaxis during pregnancy. Health Education Quarterly 19, 41-54.

Kengeya-Kayondo, J. F., J. A. Seeley, E. Kajura-Bajenja, E. Kabunga, E. Mubiru, F. Sembajja and D. W. Mulder. 1994. Recognition, treatment seeking behaviour and perceptions of cause of malaria among rural women in Uganda. Acta Tropica 58, 255-266.

Lambert, H. and C. McKevitt. 2002. Anthropology in health research: From qualitative methods to multidisciplinarity. British Medical Journal 325, 210-213.

Launiala, A. and M-L. Honkasalo. 2007. Ethnographic study of factors influencing compliance to intermittent preventive treatment of malaria during pregnancy among Yao women in rural Malawi. Transactions of the Royal Society of Tropical Medicine and Hygiene 101(10), 980-989.

Launiala, A. and T. Kulmala. 2006. The importance of understanding the local context: Women's perceptions and knowledge concerning malaria in pregnancy in rural Malawi. Acta Tropica 98, 111-117.

Lwanda, J. 2002. Tikutha: the political culture of the HIV/AIDS epidemic in Malawi. In A democracy of chameleons: Politics and culture in the new Malawi (ed.) H. Englund, 151-165. Blantyre: Christian Literature Association in Malawi.

Manderson, L. and P. Aaby. 1992. An epidemic in the field? Rapid assessment procedures and health research. Social Science & Medicine 35(7), 839-850.

Napolitano, D. A. and C. O. H. Jones. 2006. Who needs "pukka anthropologists"? A study of the perceptions of the use of anthropology in tropical public health research. Tropical Medicine & International Health 11(8), 1264-1275.

Nichter, M. 1993. Social science lessons from diarrhea research and their application to ARI. Human Organization 52(1), 53-67.

---------. 2008. Global health: Why cultural perceptions, social representations, and biopolitics matter. Tucson: University of Arizona Press.

Nyamongo, I. K. 2002. Health care switching behaviour of patients in a Kenyan rural community. Social Science & Medicine 54(3), 377-386.

O'Barr, W., D. Spain and M. Tessler. 1973. Survey research in Africa: Its applications and limits. Evanston, IL: Northwestern University Press.

Pelto, J. P., and G. H. Pelto. 1997. Studying knowledge, culture, and behavior in applied medical anthropology. Medical Anthropology Quarterly 11(2), 147-163.

Petty, R. E., and J. P. Cacioppo. 1981. Attitudes and persuasion: Classic and contemporary approaches. Dubuque, IA: W. C. Brown Co. Publishers.

Pool, R., and W. Geissler. 2007. Medical anthropology: Understanding public health. Berkshire: Open University Press.

Ratcliffe, J. W. 1976. Analyst biases in KAP surveys: A cross-cultural comparison. Studies in Family Planning 7(11), 322-330.

Ross, D. A., and J. P. Vaughan. 1986. Health interview surveys in developing countries: A methodological review. Studies in Family Planning 17(2), 78-94.

Schopper, D., S. Doussantousse and J. Orav. 1993. Sexual behaviors relevant to HIV transmission in a rural African population: How much can a KAP survey tell us? Social Science & Medicine 37(3), 401-412.

Soldan, V. A. P. 2004. How family planning ideas are spread within social groups in rural Malawi. Studies in Family Planning 35(4), 275-290.

Smith, H. L. 1993. On the limited utility of KAP-style survey data in the practical epidemiology of AIDS, with reference to the AIDS epidemic in Chile. Health Transition Review 3(1), 1-15.

Stone, L. and J. G. Campbell. 1984. The use and misuse of surveys in international development: An experiment from Nepal. Human Organization 43(1), 27-34.

TUMCHP. 2005. Unpublished meeting report, distributed via email 13 July 2005. Department of International Health, University of Tampere Medical School, Finland.

Winch, P. J., A. M. Makemba, S. R. Kamazima, M. Lurie, G. K. Lwihula, Z. Premji, J. N. Minjas and C. J. Shiff. 1996. Local terminology for febrile illnesses in Bagamoyo District, Tanzania and its impact on the design of a community-based malaria control programme. Social Science and Medicine 42(7), 1057-1067.

Yoder, P. S. 1997. Negotiating relevance: Beliefs, knowledge and practice in international health projects. Medical Anthropology Quarterly 11(2), 131-146.
