How will you analyse interview and observation based data?

How will you analyse interview and observation based data? Discuss in detail.

Answer:

Analysing interview-based data

Data analysis, whether qualitative or quantitative, requires a researcher to identify patterns and themes in the collected study data. This is a complex task, especially with qualitative data, which are non-numeric and usually in textual or narrative form. Making sense of a mass of qualitative data is a fascinating but time-consuming process. A smart research strategy can help you bring order to your data without getting bogged down in the process.

Interviews can be used to collect facts, e.g. information about people's place of work, age and so on, but such questions are usually no more than opening items which precede the main substance. The bulk of interview questions seek to elicit information about attitudes and opinions, perspectives and meanings - the very stuff of much of both psychology and sociology. Interviews are also in common use as a means of selection: for entry to school or college, getting a job, or obtaining promotion. They are widely used because they are a powerful means of both obtaining information and gaining insights. We use them because they give us an idea of what makes people tick, of the personality and motivations of the interviewee.

1. Interviews are available in a range of styles, some of which are pre-packed and mass-marketed so that they can be more or less picked off the shelf. If you have ever been stopped in the high street to be quizzed about your use of toiletries, you will know what a closed-ended, structured interview feels like on the receiving end. Social scientists make similar use of tightly controlled, pre-set interviews which have been piloted on sample groups to test their efficiency and accuracy before being tried out on larger populations.

2. These structured interviews, in their simplest form, are sometimes little more than oral questionnaires, used instead of the written form in order to obtain a higher response rate, or with respondents, especially children, who might not be literate or capable of correctly completing a complex questionnaire.

3. At the opposite extreme in interview design are completely unstructured conversations between researcher and respondent, where the latter has as much influence over the course of the interview as the former.

4. There is a half-way house, where the researcher designs a set of key questions to be raised before the interview takes place, but builds in considerable flexibility about how and when these issues are raised and allows additional topics to be introduced in response to the dynamics of conversational exchange. These are known as semi-structured interviews, and they are the form most often used in education research.

The Daily Interpretive Analysis

Perhaps one of the most critical, and undoubtedly one of the most painful, aspects of a qualitative methodology is the need for a designated team member to write a "Daily Interpretive Analysis" (DIA). The objective of the DIA is to assemble and interpret the information that was collected. In other words, at the end of every day of interviewing, a selected team member must review the notes and the tapes and write a report that summarizes and interprets the information obtained. To justify the importance of the DIA, and to further explain its purpose and method, consider the following questions.

a. Why do we need DIAs?
One way to answer this question is to contrast our methodology with the traditional survey approach. When a standardized survey questionnaire is used, it is possible to code the responses to each item in the questionnaire at the end of the field work. Since the answers are standardized, the level of ambiguity is low, and it is not necessary to remember the particular context of the interview or other things that the respondent may have said. We can say that the information is not fragile: the integrity of the data is not necessarily threatened by the passage of time. It is also the case that the analysis of the data does not take place in the field, but only begins after the data have been coded.

In our case, the kind of data recorded on tape and in handwritten notes provides an informational base that is extremely fragile. It is fragile because, as time passes, it becomes increasingly difficult (probably at an exponential rate) to reconstruct information. This is especially true of the insights that you may have while listening to the respondent, or of important relationships or connections that the respondent may express. After a few days it becomes very difficult to fully interpret handwritten notes, even very good ones. Similarly, there are flashes of insight you may have when you listen to a respondent that seem self-evident at the time and appear to be something you will not forget. The problem is that such insights turn out to be very hard to remember; as time passes, they lose a great deal of detail and nuance.

The same principle applies to taped responses. Although it is tempting to think that a tape recording of an interview will faithfully recreate all of the information, this is not the case. A tape may be good at documenting factual data that you can retrieve by listening to it at a later time. However, a great deal of understanding comes from the context of the interview and from a range of cues that are simply not captured on tape. More importantly, the moments of inspiration and clarity that you may experience during an interview are not likely to be re-created when you listen to a tape weeks or months later. Finally, the strategy of drawing information from tape recordings at a later date has the inescapable disadvantage of forcing you to listen to tapes in their entirety, and probably more than once. An interview that lasted three hours will take another three hours (or more) to review later, effectively doubling the time spent on field work.

b. How do we know the insights and conclusions are valid, especially early in the process, long before the study is completed?

The Interactive Interview is based on the assumption that the discussions with the respondents are a learning process that continues throughout the field work, from the first day to the last. Hence, it is very important to take the time to assemble the information received and to interpret and analyze the data as you go along. The very act of writing the DIA is itself a potentially valuable learning tool, because it forces you to reflect on what has been learned, and such reflections will positively inform subsequent interviews. As noted earlier, one of the functions of the DIA is to document those flashes of insight, or preliminary conclusions, that you may have when you listen to a respondent.
The tapes and notes are analyzed to show the dynamic interrelatedness of the various pieces of information that the respondent presents. The respondent's discussion is therefore much more than a collection of "reality reports" that the interviewer simply needs to list. While concrete informational items are critically important, so too are the ways in which the respondent assembles aspects of his or her reality. Writing the DIA is therefore an analytically active enterprise, one of the functions of which is to take the respondent's discussion and interpret it with reference to the larger concerns of the project. The insights or conclusions on any given day necessarily have a "provisional status": they are much like "working hypotheses" or "preliminary theories." Subsequent interviews and DIAs may lead you to alter, refine, or even reject conclusions or understandings reached earlier in the process. Similarly, you may use the DIA to record an idea that is initially expressed as a vague notion, or a hunch, and later turns out to be one of the major insights of the entire project. The only way to document and keep track of these ideas is through the DIA.

c. How much information should be in the DIA?

There is no single answer to this question, but there are several important guidelines. The DIA should be self-contained: it should contain all of the information that the interviewer deems important, and the author should try to write it in such a way that it will not be necessary to go back to the tapes or notes at a later date. It should also be presented so that, unlike field notes or tapes, the report can be read and understood at a later date, even by persons who did not write it. The DIA can contain direct quotes from the respondent. If a respondent is particularly eloquent or insightful, or expresses an especially useful idea or relationship, it is extremely valuable to include direct quotes in the DIA; having the tapes to turn to will be helpful for this purpose.

Finally, it is important to keep in mind that a key aspect of the research design is the comparative feature of the study (between large and small landholders, between regions within Brazil, and between Brazil, Ecuador and Peru). When we get to the point where we begin to carry out a comparative assessment of the data collected within different groups and across different geographic sites, it could turn out that a critical observation is based on the absence of data. For example, consider a situation in which small farmers in place A consistently mention X, Y and Z as very important variables, but none of the small producers in place B do so. This will certainly alert us to fundamental differences between the two places. It also means that, if at all possible, the DIAs should be sensitive not only to what is said, but also to what is not. The latter is, of course, quite difficult to do, for the simple reason that at the outset of the study we will not have a complete list of items to refer to. A provisional checklist may help in this regard.

d. How can we expect to complete all the interviews needed if we have to spend so much time in the field writing the DIAs?

Like all hard decisions, there will be a trade-off between spending time on a DIA or on another interview. When making this choice, keep in mind that another interview without a DIA is not likely to be very useful in the long run.
This observation underscores the importance of the DIA. At the same time, it is important to complete as many interviews as it takes to address the questions. The challenge, therefore, is to complete a sufficient number of interviews to meet the objectives of the project and, at the same time, complete the DIAs that will make the subsequent analyses possible.

e. If you have ever done interviews in rural Amazonia, you know how tiring the process can be. At the end of the day, about all most people are up to is finding a beer. Is it realistic to expect someone to write such a report at the end of every day?

Allowing time for the DIA should be explicitly built into the field work strategy. It may take place at the end of a day that is scheduled to end early. Alternatively, it may be preferable to spend the morning of the following day writing the DIA, or to accumulate two or three interviews before sitting down to write the report. Whatever strategy seems best at the time, in no case should the DIAs be postponed for more than a few days.

Format of the Daily Interpretive Analysis

Because there will be several teams in the field at the same time, and anticipating the need to carry out a comparative analysis, it is important that the DIAs follow a common format. The format consists of at least three parts: (1) Record, (2) Analysis, and (3) Conclusions/Concerns. Written notes taken during the interview and tape recordings of what was said provide the raw materials for the report. In addition, a group meeting of the team may be an efficient way to benefit from the insights and observations of other participants in the interview who, because of their different areas of expertise, are likely to bring different interpretations into play. A meeting of the team to discuss a draft of the DIA may play a similarly positive role. These suggestions emphasize the idea that even if only one person actually writes the DIA, the report is the product of a group effort. The report must be sufficiently complete, and written with sufficient clarity, that individuals who did not participate in the interview session can read and understand the document in the manner the author intends.

1. Record (Relato): The first section is a written rendition of the information provided by the informant in the process of the interactive interview. The objective is to construct as complete a record as possible of what the informant said. The challenge is to record the information in a way that remains faithful to the informant's actual thoughts and words. It may be very useful to use the tape recordings to review parts of the interview and to clarify ambiguities that may be present in the written notes. The tapes may also help in reproducing direct quotes that are particularly revealing about the informant's perspective, or that are of special relevance to the project objectives.

2. Analysis: Using the Record as a reference point, the second part of the DIA provides an analysis of the information. By analysis we mean interpreting the information provided by the informant and relating it to the main objectives of the study. In contrast to the Record, the Analysis section requires the active involvement of the analyst, who is expected to reorganize the information and interpret the interviews in meaningful ways.
This may entail drawing connections between different ideas or processes that were mentioned in the interactive interview, even if these connections were not explicitly noted by the informant. Similarly, it may involve identifying patterns of association between variables, or noting the relationship between what an informant said and events taking place at the regional, national, or global level. The objective of the Analysis section of the DIA is to interpret the content of the interactive interview in a way that relates the findings to the objectives of the project. To accomplish this, it may be helpful to keep in front of you the various figures and any other materials that summarize the main concepts, relationships, and hypotheses that are central to the study. The Analysis section is explicitly intended to be a creative exercise that draws heavily on your background knowledge, your ability to listen with an open mind, and your capacity to link specific observations and particular pieces of information to more general concepts and relationships.

3. Conclusions and Concerns: In this section you are encouraged to go beyond the Record and Analysis to draw more general conclusions that you think may be warranted. Statements in the Conclusions and Concerns section can be thought of as "working hypotheses" or "preliminary conclusions" that are worth keeping track of as the field work proceeds. The idea is to provide a place in the interpretation format for members of the team to capture and record those insights and revelations that so often come about in the process of interviewing people, but which are rarely recorded as such. Observations in the C&C section may have the important function of informing or revising the kinds of questions that are asked in subsequent interviews; in this way the C&C observations can serve to incorporate a learning process into the ongoing field work. This section of the DIA is also the place to record the concerns that team members may have. For example, as the interviews proceed, it may become increasingly evident that some questions and scenarios do not work, that some of the ways we have conceptualized the issues may be flawed, or that important considerations are being excluded. Keeping a record of these concerns provides important information for assessing the success of the project, and may also alert the team to ways to revise and improve the interviews and the process of data collection and analysis.

Finally, it is essential that the report include information about the interview, and sufficient information to link the report to the field notes that were taken and the tape recordings that were made. It is very important that this be done in such a way that the information cannot be associated with named individuals and that the privacy of the informants is protected. Suggested content:

1. Date ____________
2. Time ____________
3. Location ____________
4. Team members present ____________
5. Interview Id # ____________ (same # on associated notes and tapes)
6. Author of the report ____________
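To keep DIAs consistent across teams, it can help to hold each report in a fixed structure. The following is a minimal sketch, assuming Python is used for record keeping; the class and field names are hypothetical and simply mirror the three-part format and the suggested content list above, not any prescribed schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class DailyInterpretiveAnalysis:
    """One DIA in the common three-part format, plus metadata linking it
    to the associated field notes and tapes (hypothetical template)."""
    date: str                  # 1. Date
    time: str                  # 2. Time
    location: str              # 3. Location (kept general to protect informants)
    team_members: List[str]    # 4. Team members present
    interview_id: str          # 5. Same id as on the associated notes and tapes
    author: str                # 6. Author of the report
    record: str = ""           # Part 1: faithful rendition of what was said
    analysis: str = ""         # Part 2: interpretation relative to project aims
    conclusions_concerns: List[str] = field(default_factory=list)  # Part 3


# Example usage: record a working hypothesis at the end of a day of interviewing
dia = DailyInterpretiveAnalysis(
    date="2014-12-08", time="17:30", location="Site A",
    team_members=["interviewer", "note-taker"],
    interview_id="A-07", author="note-taker",
)
dia.conclusions_concerns.append(
    "Working hypothesis: small producers in place A stress X; check whether place B does."
)
```

Keeping the metadata fields separate from the three narrative parts makes it easy to link each report back to its notes and tapes without naming individuals.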
Analyzing interview data

Formulating central questions or hypotheses, conducting interviews, and analyzing interview data are parts of a reciprocating process. For interviews, data analysis begins after the first few interviews and shapes subsequent data gathering: early interviews will influence the questions and content of later ones. Bogdan and Biklin (1998) provide practical steps to guide you through this process.

While gathering data

Refine your focus. After a few initial interviews, narrow the scope of data collection. Guided by central questions or hypotheses, decide whether you want to focus on minute details of interactions or on general processes. Referring to models from similar research can help guide your work.

Reassess central questions. Based on initial interviews, determine whether the central questions are still relevant. For example, imagine you begin an evaluation of an academic skills-building program for students who are not fully prepared for college. An initial question you and program administrators agree on is: "What is the process by which students build skills that prepare them for college courses?" After three interviews, however, you realize that most students in the program are already prepared for college, and most program activities are unrelated to building academic skills. You might replace your initial question with a new one: "What benefits do students derive from the program?" Although mid-course adjustments are sometimes needed, in most cases you should adhere to the evaluation's purpose, continually assessing whether data collection and analysis are answering the central questions.

Transcribe the interview. Consider the following questions when transcribing data:
• Is special formatting needed to meet the requirements of qualitative analysis software?
• Will the transcription be verbatim (every utterance recorded) or only include complete thoughts and useful information?
• How will background noises, interruptions, and silences be recorded, if at all?
• How will non-standard grammar, slang, and dialects be recorded?
If you hire a transcriber, explain how to format documents following your transcription rules. Be sure to check the transcription against the audiotape for accuracy. Providing transcribers with your interview questions is also helpful.

Plan future interviews based on your early interviews. Transcribe interviews quickly so you can resolve ambiguities while the interview is still fresh. Review your notes and interview transcripts to refine your questions or add new questions based on emerging topics. Ask yourself: "What do I still need to know or confirm?"

Record insights and summarize your reflections after each interview. When you have important realizations during interviews, write them down as soon as possible. After every three or four interviews, read over your interview notes and write a one- or two-page summary of the themes you are noticing and the questions you have. Note and follow up any unexpected data, making sure to interview extreme cases (participants who have had very positive or very negative experiences). Once themes emerge, it can be helpful to put a key observation to future respondents to get their viewpoint.

After data collection

Develop coding categories. A major step in analyzing qualitative data is coding speech into meaningful categories, enabling you to organize large amounts of text and discover patterns that would be difficult to detect by just listening to a tape or reading a transcript. Always keep an original copy of your transcripts. Bogdan and Biklin (1998) suggest first ordering interview transcripts and other information chronologically or by some other criteria.
Carefully read all your data at least twice during long, undisturbed periods. Next, conduct initial coding by generating numerous category codes as you read responses, labeling data that are related without worrying about the variety of categories. Write notes to yourself, listing ideas or diagramming relationships you notice, and watch for special vocabulary that respondents use, because it often indicates an important topic. Because codes are not always mutually exclusive, a piece of text might be assigned several codes. Last, use focused coding to eliminate, combine, or subdivide coding categories, and look for repeating ideas and larger themes that connect codes. Repeating ideas are the same idea expressed by different respondents, while a theme is a larger topic that organizes or connects a group of repeating ideas. Try to limit final codes to between 30 and 50. After you have developed coding categories, make a list that assigns each code an abbreviation and description (a brief sketch of this appears after the lists below).

Berkowitz (1997) suggests considering six questions when coding qualitative data:
• What common themes emerge in responses about specific topics? How do these patterns (or lack thereof) help to illuminate the broader central question(s) or hypotheses?
• Are there deviations from these patterns? If so, are there any factors that might explain these deviations?
• How are participants' environments or past experiences related to their behavior and attitudes?
• What interesting stories emerge from the responses? How do they help illuminate the central question(s) or hypotheses?
• Do any of these patterns suggest that additional data may be needed? Do any of the central questions or hypotheses need to be revised?
• Are the patterns that emerge similar to the findings of other studies on the same topic? If not, what might explain these discrepancies?

Bogdan and Biklin (1998) provide common types of coding categories, but emphasize that your central questions or hypotheses shape your coding scheme.
• Setting/context codes provide background information on the setting, topic, or subjects.
• Defining-the-situation codes categorize the world view of respondents and how they see themselves in relation to a setting or your topic.
• Respondent perspective codes capture how respondents define a particular aspect of a setting. These perspectives may be summed up in phrases they use, such as, "Say what you mean, but don't say it mean."
• Respondents' ways of thinking about people and objects codes capture how they categorize and view each other, outsiders, and objects. For example, a dean at a private school may categorize students: "There are crackerjack kids and there are junk kids."
• Process codes categorize sequences of events and changes over time.
• Activity codes identify recurring informal and formal types of behavior.
• Event codes, in contrast, are directed at infrequent or unique happenings in the setting or lives of respondents.
• Strategy codes relate to the ways people accomplish things, such as how instructors maintain students' attention during lectures.
• Relationship and social structure codes tell you about alliances, friendships, and adversaries, as well as about more formally defined relations such as social roles.
• Method codes identify your research approaches, procedures, dilemmas, and breakthroughs.
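To make the coding workflow concrete, here is a minimal Python sketch of a codebook that assigns each code an abbreviation and description, tags transcript segments, and counts repeating ideas across respondents. The codes, excerpts, and respondent ids are invented for illustration and are not drawn from the cited sources.

```python
from collections import Counter

# Hypothetical codebook: abbreviation -> description (in practice, 30-50 final codes)
codebook = {
    "SET": "Setting/context information",
    "PERSP": "Respondent perspective on the program",
    "STRAT": "Strategies respondents use to accomplish things",
}

# Coded transcript segments: (respondent id, code, excerpt);
# one segment may carry several codes, so the same excerpt can appear more than once
coded_segments = [
    ("R1", "PERSP", "The program mostly helped me manage my time."),
    ("R1", "STRAT", "I study with a friend before every exam."),
    ("R2", "PERSP", "It was useful, but not for academic skills."),
    ("R3", "PERSP", "I was already prepared; I joined for the community."),
]

# Repeating ideas: how many segments, and how many distinct respondents, per code
segment_counts = Counter(code for _, code, _ in coded_segments)
respondents_per_code = {
    code: len({rid for rid, c, _ in coded_segments if c == code})
    for code in codebook
}

for code, description in codebook.items():
    print(f"{code:6} {description}: "
          f"{segment_counts.get(code, 0)} segments, "
          f"{respondents_per_code[code]} respondents")
```

Counting distinct respondents per code, rather than segments alone, is what distinguishes a repeating idea (the same idea voiced by different people) from a single talkative respondent.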
Analysis of Observation Data

Once observations are underway, the team begins the process of data gathering and analysis. The process Design Team usually carries this forward as a Steering Committee for at least the first few months. This gives the process designers feedback on how their design is working and helps them:
• identify where they might improve the system,
• develop action plans for addressing major concerns, and
• evaluate their initial celebration plans.

Data tracking and analysis is a key element of behavioral safety and one of the most challenging tasks for a young Steering Committee. The data should therefore be tracked and reviewed at least monthly and portrayed with simple graphs or bar charts that might include:
• the number of observations conducted (by area or department),
• the percentage of employees conducting observations,
• the percentage and number of concerns by category, and
• the content of comments (for clarification and detail).
(A brief sketch of computing such a summary appears below.)

Graphing data weekly or monthly allows the Steering Committee to look at both process and safety issues. With data on participation, the Committee can establish and maintain recognition and celebration targets for individual and group involvement. Achieving maximal participation is a major key to the success of behavioral safety processes and requires considerable attention throughout the life of the process. The team has to establish milestones for celebrating participation, while taking care to avoid reinforcement that might encourage employees to submit falsified observation forms. Tracking concerns on bar charts allows problem solving in areas where concerns are frequent and/or high-risk, as well as tracking improvements. The Steering Committee closes the "feedback loop" to employees by sharing these summary results plant-wide each month in safety meetings. In this way, it encourages further problem solving and the involvement of area teams in action planning for continuous improvement. In addition, the Steering Committee develops action plans to address the primary concerns; these will often involve engineering and maintenance to address the physical conditions that contribute to unsafe acts.

The ultimate test of any safety effort is to make sure the process is indeed having an impact on safety. As the process matures, the Steering Committee must compare the observation data to actual site safety results to make sure observers are truly looking at the behaviors contributing to accidents and injuries. This provides a check on the accuracy and quality of the observations. The observations and feedback are instrumental in achieving the initial safety improvements that result from behavioral change. The Steering Committee's ongoing use of the observation data to maintain the process, and the implementation of action items to address concerns, are the keys to longer-term success and continual improvement.
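As a rough illustration of the monthly summaries listed above (observation counts by department, participation percentage, and concerns by category), the sketch below assumes observation forms have been keyed into simple records; the field names, departments, and headcount are hypothetical.

```python
from collections import Counter

# Hypothetical observation forms submitted during one month
observations = [
    {"observer": "E01", "department": "Packing",  "concern_category": None},
    {"observer": "E02", "department": "Packing",  "concern_category": "housekeeping"},
    {"observer": "E02", "department": "Shipping", "concern_category": "PPE"},
    {"observer": "E05", "department": "Shipping", "concern_category": "PPE"},
]
total_employees = 25  # assumed site headcount, for the participation percentage

# Observations conducted, by department
observations_by_dept = Counter(o["department"] for o in observations)
# Percentage of employees who conducted at least one observation
participation = 100 * len({o["observer"] for o in observations}) / total_employees
# Number of concerns raised, by category
concerns = Counter(o["concern_category"] for o in observations if o["concern_category"])

print("Observations by department:", dict(observations_by_dept))
print(f"Employees conducting observations: {participation:.0f}%")
print("Concerns by category:", dict(concerns))
```

These counts are the raw material for the simple bar charts the Steering Committee shares plant-wide each month.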
During direct observation it is common for an observer to be present who sits passively and records as accurately as possible what is going on. Usually it is the behaviour of one or more persons that is recorded, and an advantage of the technique is that a number of people interacting with each other and with the same piece of equipment can be observed. A variation on this technique is to have a video camera mounted at the point of usage, which records interactions that can later be watched and analysed by an observer. The observation can be totally "free" or more structured, i.e. observers record events as belonging to one of a number of discrete categories identified in advance.

The number of categories adopted largely depends on what the observers intend the data to be used for: very broad categories may be used for some studies, while detailed categories will be used for others. In some investigations a freer approach may be taken, in which the observer records all of their impressions during observation rather than trying to group them in some way. However, this introduces a high degree of subjectivity into the evaluation process, and in practice it is usually better to try to define the categories of behaviour that will be observed. One way of achieving this is to perform a pilot study in which free recording takes place, and then to use the results to identify relevant categories for use in a wider study and to define clearly the criteria observers should apply in assigning observed behaviour to particular categories, e.g. types of errors made. The degree of structure is related to the "objectivity" of the method: less structure may result in observations that reflect the observer's point of view more than the user's behaviour, and it can also make comparisons difficult when more than one observer is used. Where more than one observer is used, it is particularly important to ensure that all observers agree on what they are recording and the criteria they are using.

The data captured during direct observation can include objective as well as subjective information, as it is possible for observers to accurately record the amount of time taken to perform particular activities and the errors made in use. More subjective information can also be valuable, e.g. any anxiety or frustration observed, and the observer's impressions of the state of mind of the user.

Data analysis

There are several ways to treat raw data from direct observational studies. The simplest is to count the frequencies and durations of the different categories of activity. These categories are often the same as those used to focus the data capture, but may also be inferred from any wider descriptions recorded. In the previous example, "utterances about time referring to an event rather than to the clock" might be such an inferred category. In summarising complex data there is always the danger of missing valuable information; fuller accounts of interactions can often provide insights that are missing from summaries. For this reason many researchers prefer to capture more information than they believe they will need. Then, if data analysis suggests interesting aspects of the interactions that are worthy of further investigation, it is possible to go back to the data and re-analyse it with these changes of perspective in mind.
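A minimal sketch of this simplest treatment, counting frequencies and total durations per category from timestamped observation records; the categories and timings are invented for illustration.

```python
from collections import defaultdict

# Hypothetical observation log: (category, start_seconds, end_seconds)
events = [
    ("reads manual",      0,  95),
    ("uses equipment",   95, 340),
    ("asks colleague",  340, 370),
    ("uses equipment",  370, 610),
]

frequency = defaultdict(int)
total_duration = defaultdict(int)
for category, start, end in events:
    frequency[category] += 1           # how often the activity occurred
    total_duration[category] += end - start  # how long it took in total

for category in frequency:
    print(f"{category}: {frequency[category]} occurrences, "
          f"{total_duration[category]} s total")
```

Because the raw log is kept, the same records can later be re-analysed with different or inferred categories if the first summary raises new questions.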
Summary

In observational research, the reliability of data refers to the degree of agreement between sets of observational data collected independently from the same scene by two different observers (interobserver agreement) or by the same observer at different times in the data collection process (intraobserver agreement). Reliable data are a first prerequisite for answering research questions. It is important to determine whether data sets collected by different observers, or at different times, differ so little that one can safely assume they are equally valid for analysis. Reliability analysis is also used to train new observers in coding schemes and observational data collection, for example by testing their data against a reference data set, both collected from the same video recording.
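As an illustration of interobserver agreement, the sketch below compares the codes two observers assigned to the same intervals of a video recording, reporting raw percent agreement and Cohen's kappa (which discounts the agreement expected by chance). Cohen's kappa is a standard reliability statistic but is not prescribed by the text above, and the codes shown are hypothetical.

```python
from collections import Counter

# Hypothetical codes assigned to the same 10 intervals of one video by two observers
observer_a = ["safe", "safe", "at-risk", "safe", "at-risk",
              "safe", "safe", "at-risk", "safe", "safe"]
observer_b = ["safe", "at-risk", "at-risk", "safe", "at-risk",
              "safe", "safe", "safe", "safe", "safe"]

n = len(observer_a)
# Raw (percent) agreement: proportion of intervals coded identically
observed = sum(a == b for a, b in zip(observer_a, observer_b)) / n

# Chance agreement: sum over codes of the product of each observer's marginal proportions
counts_a, counts_b = Counter(observer_a), Counter(observer_b)
expected = sum((counts_a[c] / n) * (counts_b[c] / n)
               for c in set(observer_a) | set(observer_b))

kappa = (observed - expected) / (1 - expected)
print(f"Percent agreement: {observed:.0%}")
print(f"Cohen's kappa: {kappa:.2f}")
```

The same comparison can be run between a trainee's coding and a reference data set to decide when a new observer is ready for independent data collection.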
Posted on: Mon, 08 Dec 2014 14:33:02 +0000
