This process is almost the same for any kind of empirical study. However, as case study methodology is a flexible design strategy, there is a significant amount of iteration over the steps (Andersson and Runeson 2007b).
The data collection and analysis may be conducted incrementally. If insufficient data is collected for the analysis, more data collection may be planned etc.
However, there is a limit to the flexibility; the case study should have specific objectives set out from the beginning. If the objectives change, it is a new case study rather than a change to the existing one, though this is a matter of judgment, as are all other classifications.
Eisenhardt adds two steps between steps 4 and 5 above in her process for building theories from case study research (Eisenhardt 1989): a) shaping hypotheses and b) enfolding literature, while the rest, except for terminological variations, are the same as above.

2.5 Definitions

In this paper, we use the following terminology. The overall objective is a statement of what is expected to be achieved in the case study.
Others may use goals, aims or purposes as synonyms or hyponyms for objective. The objective is refined into a set of research questions, which are to be answered through the case study analysis.
A case may be based on a software engineering theory. It is beyond the scope of this article to discuss in detail what is meant by a theory.
However, Sjøberg et al. describe a framework for theories including constructs of interest, relations between constructs, explanations of the relations, and the scope of the theory (Sjøberg et al. 2008).
With this way of describing theories, software engineering theories include at least one construct from software engineering.
A research question may be related to a hypothesis (sometimes called a proposition (Yin 2003)), i.e., a supposed explanation for an aspect of the phenomenon under study. Hypotheses may alternatively be generated from the case study for further research.
The case is referred to as the object of the study (e.g., a project), and it contains one or more units of analysis. Data is collected from the subjects of the study, i.e., those providing the information.
Data may be quantitative (numbers, measurements) or qualitative (words, descriptions). The case study protocol defines the detailed procedures for collection and analysis of the raw data, sometimes called field procedures. The guidelines for conducting case studies presented below are organized according to this process.
Section 3 is about setting up goals for the case study and preparing for data collection, Section 4 discusses collection of data, Section 5 discusses data analysis, and Section 6 provides some guidelines for reporting.

3.1 Defining a Case

Case study research is of flexible type, as mentioned before. This does not mean, however, that planning is unnecessary; on the contrary, good planning for a case study is crucial for its success. There are several issues that need to be planned, such as what methods to use for data collection, what departments of an organization to visit, what documents to read, which persons to interview, how often interviews should be conducted, etc.
These plans can be formulated in a case study protocol, see Section 3. A plan for a case study should at least contain the following elements (Robson 2002):

- Objective—what to achieve?
- Theory—frame of reference
- Methods—how to collect data?
- Selection strategy—where to seek data?

The objective of the study may be, for example, exploratory, descriptive, explanatory, or improving. The objective is naturally more generally formulated and less precise than in fixed research designs.
The objective is initially more like a focus point, which evolves during the study. The research questions state what needs to be known in order to fulfill the objective of the study.
Similar to the objective, the research questions evolve during the study and are narrowed to specific research questions during the study iterations (Andersson and Runeson 2007b). The case may in general be virtually anything which is a “contemporary phenomenon in its real-life context” (Yin 2003).
In software engineering, the case may be a software development project, which is the most straightforward choice. It may alternatively be an individual, a group of people, a process, a product, a policy, a role in the organization, an event, a technology, etc. Any of these may also constitute a unit of analysis within a case.
In the information systems field, the case may be “individuals, groups…or an entire organization. Alternatively, the unit of analysis may be a specific project or decision” (Benbasat et al. 1987).
Studies of “toy programs” or the like are of course excluded due to their lack of a real-life context.
Yin (2003) distinguishes between holistic case studies, where the case is studied as a whole, and embedded case studies, where multiple units of analysis are studied within a case, see Fig. Whether to define a study consisting of two cases as holistic or embedded depends on what we define as the context and research goals. In our XP example, two projects are studied in two different companies in two different application domains, both using agile practices (Karlström and Runeson 2006). The projects may be considered two units of analysis in an embedded case study if the context is software companies in general and the research goal is to study agile practices.
If, on the other hand, the context is considered to be the specific company or application domain, they have to be seen as two separate holistic cases. One comment on a specific case study reads: “Even though this study appeared to be a single-case, embedded unit analysis, it could be considered a multiple-case design, due to the centralized nature of the sites.” Subjects and organizations must explicitly agree to participate in the case study, i.e., informed consent is required.
In some countries, this is even legally required. It may be tempting for the researcher to collect data through indirect or independent data collection methods without asking for consent.
However, ethical standards must be maintained to preserve long-term trust in software engineering research. Legislation on research ethics differs between countries and continents.
In many countries it is mandatory to have the study proposal reviewed and accepted with respect to ethical issues (Seaman 1999) by a review board or a similar function at a university. Even if there are no such rules, it is recommended that the case study protocol be reviewed by colleagues to help avoid pitfalls. Consent agreements are preferably handled through a form or contract between the researchers and the individual participant.
In an empirical study conducted by the authors of this paper, the following information was included in this kind of form:

- Names of researchers and contact information.
- A short description of what the participant should do during the study and what steps the researcher will carry out during these activities.
- A text clearly stating that participation is voluntary, and that collected data will be anonymous.
- A list of benefits for the participants, in this case for example experience from using a new technique and feedback on effectiveness.
- A description of how confidentiality will be assured, including a description of how collected material will be coded and identified in the study.
- Date and signatures of participant and researchers.
If the researchers intend to use the data for other, not yet defined purposes, this should be signed separately to allow participants to choose if their contribution is for the current study only, or for possible future studies. Issues on confidentiality and publication should also be regulated in a contract between the researcher and the studied organization.
However, information can be sensitive not only when leaking outside a company. Data collected from, and opinions stated by, individual employees may be sensitive if presented, e.g., to their management.
The researchers must have the right to keep their integrity and adhere to agreed procedures in such cases. Companies may not know academic practices for publication and dissemination, and must hence be explicitly informed about them.
From a publication point of view, the relevant data to publish is rarely sensitive to the company since data may be made anonymous. However, it is important to remember that it is not always sufficient to remove names of companies or individuals.
They may be identified by their characteristics if they are selected from a small set of people or companies, and they may be harmed, for example, by revealing deficiencies in their software engineering practices, or if their product comes out last in a comparison (Amschler Andrews and Pradhan 2001).
The chance that this may occur must be discussed upfront and made clear to the participants of the case study. If violations of the law are identified during the case study, these must be reported, even though “whistle-blowers” are rarely rewarded.
The inducements for individuals and organizations to participate in a case study vary, but there are always some kinds of incentives, tangible or intangible. It is preferable to make the inducements explicit, i.e., to specify what the incentives are for the participants.
Thereby the inducement’s role in threatening the validity of the study may also be analyzed. Giving feedback to the participants of a study is critical for the long term trust and for the validity of the research.
Firstly, transcripts of interviews and observations should be sent back to the participants to enable correction of raw data. Secondly, analyses should be presented to them in order to maintain their trust in the research.
Participants need not necessarily agree with the outcome of the analysis, but feeding back the analysis results increases the validity of the study. In all three example studies, issues of confidentiality were handled through non-disclosure agreements and general project cooperation agreements between the companies and the university, lasting longer than one case study.
These agreements state that the university researchers are obliged to have publications approved by representatives of the companies before they are published, and that raw data must not be distributed to anyone but those signing the contract. The researchers are not obliged to report their sources of facts to management, unless it is found that a law has been violated. In the XP study, in order to ensure that interviewees were not cited wrongly, it was agreed that the transcribed interviews be sent back to them for review.
In the beginning of each interview, interviewees were informed about their rights in the study.
In study QA, feedback meetings for analysis and interpretation were explicitly a part of the methodology (Andersson and Runeson 2007b).
When negotiating publication of data, we were explicitly told that raw numbers of defects could not be published, but percentages over phases could, which was acceptable for the research purposes.
All three studies were conducted in Sweden, where only studies in medicine are explicitly regulated by law; hence there was no approval of the studies by a review board beforehand.

The checklist items for case study design include the following:

2. Are clear objectives, preliminary research questions, and hypotheses (if any) defined in advance?
3. Is the theoretical basis—relation to existing literature or other cases—defined?
4. Are the authors’ intentions with the research made clear?
5. Is the case adequately defined (size, domain, process, subjects…)?
6. Is a cause–effect relation under study? If yes, is it possible to distinguish the cause from other factors using the proposed design?
7. Does the design involve data from multiple sources (data triangulation), using multiple methods (method triangulation)?
8. Is there a rationale behind the selection of subjects, roles, artifacts, viewpoints, etc.?
9. Is the specified case relevant to validly address the research questions (construct validity)?
10. Is the integrity of individuals/organizations taken into account?

4 Collecting Data

4.1 Different Data Sources

There are several different sources of information that can be used in a case study. It is important to use several data sources in a case study in order to limit the effects of one interpretation of one single data source.
If the same conclusion can be drawn from several sources of information, i.e., through triangulation, this conclusion is stronger than a conclusion based on a single source.
In a case study it is also important to take into account the viewpoints of different roles, and to investigate differences, for example, between different projects and products. Commonly, conclusions are drawn by analyzing differences between data sources.
According to Lethbridge et al. (2005), data collection techniques can be divided into three levels:

First degree: Direct methods, meaning that the researcher is in direct contact with the subjects and collects data in real time. This is the case with, for example, interviews, focus groups, Delphi surveys (Dalkey and Helmer 1963), and observations with “think aloud protocols”.

Second degree: Indirect methods, where the researcher directly collects raw data without actually interacting with the subjects during the data collection. This approach is, for example, taken in Software Project Telemetry (Johnson et al. 2005), where the usage of software engineering tools is automatically monitored, and in observation through video recording.

Third degree: Independent analysis of work artifacts, where already available and sometimes compiled data is used. This is for example the case when documents such as requirements specifications and failure reports from an organization are analyzed, or when data from organizational databases such as time accounting is analyzed.
First degree methods are mostly more expensive to apply than second or third degree methods, since they require significant effort both from the researcher and the subjects. An advantage of first and second degree methods is that the researcher can to a large extent control exactly what data is collected, how it is collected, in what form the data is collected, what the context is, etc.
Third degree methods are mostly less expensive, but they do not offer the same control to the researcher; hence the quality of the data is not under control either, neither regarding the original data quality nor its use for the case study purpose. In many cases the researcher must, to some extent, base the details of the data collection on what data is available.
For third degree methods, it should also be noted that the data has been collected and recorded for another purpose than that of the research study, contrary to general metrics guidelines (van Solingen and Berghout 1999). It is not certain that requirements on data validity and completeness were the same when the data was collected as they are in the research study.
In Sections 4.2–4.5 we discuss specific data collection methods, where we have found interviews, observations, archival data and metrics to be applicable to software engineering case studies (Benbasat et al. 1987). In study XP, data is collected mainly through interviews, i.e., a first degree method. The evaluation of a proposed method in study RE involves filling out a form for prioritization of requirements.
In study QA, stored data in the form of defect reporting metrics were used as a major source of data, i.e., a third degree method.
All studies also included one or several feedback steps where the organizations gave feedback on the results. These data were complemented with second or third degree data.
4.2 Interviews

Data collection through interviews is important in case studies.
In interview-based data collection, the researcher asks a series of questions to a set of subjects about the areas of interest in the case study.
In most cases one interview is conducted with every single subject, but it is also possible to conduct group interviews. The dialogue between the researcher and the subject(s) is guided by a set of interview questions.
The interview questions are based on the topic of interest in the case study. That is, the interview questions are based on the formulated research questions (but they are of course not formulated in the same way).
Interview questions can be open, i.e., allowing and inviting a broad range of answers and issues from the interviewed subject, or closed, offering a limited set of alternative answers. Interviews can, for example, be divided into unstructured, semi-structured and fully structured interviews (Robson 2002).
In an unstructured interview, the interview questions are formulated as general concerns and interests from the researcher. In this case the interview conversation will develop based on the interest of the subject and the researcher.
In a fully structured interview all questions are planned in advance and all questions are asked in the same order as in the plan. In many ways, a fully structured interview is similar to a questionnaire-based survey.
In a semi-structured interview, questions are planned, but they are not necessarily asked in the same order as they are listed. The development of the conversation in the interview can determine the order in which the different questions are handled, and the researcher can use the list of questions to be certain that all questions are covered.
Additionally, semi-structured interviews allow for improvisation and exploration of the studied objects. Semi-structured interviews are common in case studies.
The different types of interviews are summarized in Table 4.

4.3 Observations

Observations according to case 1 or case 2 are typically conducted in action research or classical ethnographic studies, where the researcher is part of the team and not only seen as a researcher by the other team members.
The difference between case 1 and case 2 is that in case 1 the researcher is seen as an “observing participant” by the other subjects, while she is more seen as a “normal participant” in case 2. In case 3 the researcher is seen only as a researcher.
The approaches for observation typically include observations with first degree data collection techniques, such as a “think aloud” protocol as described above. In case 4 the subjects are typically observed with a second degree technique such as video recording (sometimes called video ethnography).
An advantage of observations is that they may provide a deep understanding of the phenomenon that is studied. Further, it is particularly relevant to use observations where it is suspected that there is a deviation between an “official” view of matters and the “real” case (Robinson et al.).
It should, however, be noted that observation produces a substantial amount of data, which makes the analysis time consuming.
In the three example studies, no extensive observations, e.g., through video recording or think-aloud procedures, were conducted. In a study related to the XP study, Sharp and Robinson use observations (Sharp and Robinson 2004).
The observer spent one week with an XP team, taking part in everyday activities, including pair programming. Data collected consisted of field notes, audio recordings of meetings and discussions, photographs, and copies of artifacts.
4.4 Archival Data

Archival data refers to, for example, meeting minutes, documents from different development phases, organizational charts, financial records, and previously collected measurements in an organization.
Benbasat et al. (1987) and Yin (2003) distinguish between documentation and archival records, while we treat them together and see the borderline rather between qualitative data (minutes, documents, charts) and quantitative data (records, metrics), the latter discussed in Section 4.5.
Archival data is a third degree type of data that can be collected in a case study.
For this type of data a configuration management tool is an important source, since it enables the collection of a number of different documents and different versions of documents. As for other third degree data sources it is important to keep in mind that the documents were not originally developed with the intention to provide data to research in a case study.
A document may, for example, include parts that are mandatory according to an organizational template but of lower interest for the project, which may affect the quality of that part. It should also be noted that some information needed by the researcher may be missing, which means that archival data analysis must be combined with other data collection techniques, e.g., surveys, in order to obtain missing historical factual data (Flynn et al.).
It is of course hard for the researcher to assess the quality of the data, although some information can be obtained by investigating the purpose of the original data collection, and by interviewing relevant people in the organization.
In study QA, archival data was a major source of information. Three different projects from one organization were studied.
One of the projects was conducted prior to the study, which meant that the data from this project was analyzed in retrospect. We studied process models as well as project specifications and reports.
In study XP, archival data in the form of process models was used as a complementary source of information.

4.5 Metrics

The above mentioned data collection techniques are mostly focused on qualitative data. However, quantitative data is also important in a case study.
Software measurement is the process of representing software entities, like processes, products, and resources, in quantitative numbers (Fenton and Pfleeger 1996). Collected data can either be defined and collected for the purpose of the case study, or already available data can be used. The first case gives, of course, the most flexibility and the data that is most suitable for the research questions under investigation.
The definition of what data to collect should be based on a goal-oriented measurement technique, such as the Goal Question Metric method (GQM) (Basili and Weiss 1984; van Solingen and Berghout 1999).
This means that metrics are derived based on goals that are formulated for the measurement activity, and thus that relevant metrics are collected.
It also implies that the researcher can control the quality of the collected data and that no unnecessary data is collected.
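To illustrate the goal-oriented derivation described above, the following sketch records goal–question–metric traceability so that only metrics motivated by a question under the goal are collected. It is an invented toy example: the goal, questions, and metric names are illustrative assumptions, taken neither from the GQM literature nor from the example studies.

```python
# Hypothetical GQM-style traceability sketch: a goal is refined into
# questions, and each question is refined into metrics, so that every
# collected metric can be traced back to the measurement goal.
goal = "Characterize the fault-detection effectiveness of system testing"

questions_to_metrics = {
    "How many faults are found per test phase?": ["faults_per_phase"],
    "How large are the tested modules?": ["module_size_loc"],
    "How long does each test phase take?": ["phase_duration_days"],
}

def derive_metrics(q2m):
    """Collect the set of metrics implied by the questions; anything
    outside this set need not be collected (no unnecessary data)."""
    return sorted({m for metrics in q2m.values() for m in metrics})

print(derive_metrics(questions_to_metrics))
```

The point of the structure is the traceability itself: a metric with no question, or a question with no goal, signals data that is being collected without a rationale.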
Examples of already available data are effort data from older projects, sales figures of products, metrics of product quality in terms of failures etc. This kind of data may, for example, be available in a metrics database in an organization.
When this kind of data is used, it should be noted that all the problems reappear that are otherwise solved by a goal-oriented measurement approach. The researcher can neither control nor assess the quality of the data, since it was collected for another purpose, and, as for other forms of archival analysis, there is a risk of missing important data.
The archival data in study QA was mainly in the form of metrics collected from defect reporting and configuration management systems but also from project specifications. Examples of metrics that were collected are number of faults in modules, size of modules and duration for different test phases.
In study XP, defect metrics were used as complementary data for triangulation purposes.

4.6 Checklists

The checklist items for preparation and conduct of data collection are shown in Tables 6 and 7, respectively.

Table 6 Preparation for data collection checklist items

11. Is a case study protocol for data collection and analysis derived (what, why, how, when)? Are procedures for its update defined?
12. Are multiple data sources and collection methods planned (triangulation)?
13. Are measurement instruments and procedures well defined (measurement definitions, interview questions)?
14. Are the planned methods and measurements sufficient to fulfill the objective of the study?
15. Is the study design approved by a review board, and has informed consent been obtained from individuals and organizations?

Table 7 Data collection checklist items

16. Is data collected according to the case study protocol?
17. Is the observed phenomenon correctly implemented (e.g., to what extent is a design method under study actually used)?
18. Are sensitive results identified (for individuals, the organization or the project)?
20. Are the data collection procedures well traceable?
21. Does the collected data provide ability to address the research question?

5 Data Analysis

5.1 Quantitative Data Analysis

Data analysis is conducted differently for quantitative and qualitative data.
For quantitative data, the analysis typically includes analysis of descriptive statistics, correlation analysis, development of predictive models, and hypothesis testing. All of these activities are relevant in case study research.
Descriptive statistics, such as mean values, standard deviations, histograms and scatter plots, are used to get an understanding of the data that has been collected. Correlation analysis and development of predictive models are conducted in order to describe how a measurement from a later process activity is related to an earlier process measurement.
Hypothesis testing is conducted in order to determine if there is a significant effect of one or several variables (independent variables) on one or several other variables (dependent variables). It should be noted that methods for quantitative analysis assume a fixed research design.
For example, if a question with a quantitative answer is changed halfway in a series of interviews, this makes it impossible to interpret the mean value of the answers. Further, quantitative data sets from single cases tend to be very small, due to the number of respondents or measurement points, which causes special concerns in the analysis.
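As a concrete illustration of the descriptive statistics and correlation analysis mentioned above, the following sketch computes mean, sample standard deviation, and Pearson correlation for invented measurements (module size versus faults found). The numbers are fabricated for illustration and are not data from the example studies; Pearson correlation is computed directly to keep the sketch dependency-free.

```python
import math

# Invented example measurements from a single case: module size (LOC)
# and number of faults found, one pair per module.
size = [120, 450, 300, 800, 150, 600]
faults = [2, 9, 5, 15, 3, 11]

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    # Sample standard deviation (n - 1 in the denominator).
    m = mean(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))

def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length series.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / math.sqrt(sum((x - mx) ** 2 for x in xs) *
                           sum((y - my) ** 2 for y in ys))

# Descriptive statistics give a first understanding of the data ...
print(f"mean size: {mean(size):.1f}, stdev: {stdev(size):.1f}")
# ... and correlation relates a later measurement to an earlier one.
print(f"size/fault correlation: {pearson(size, faults):.3f}")
```

As the text cautions, a single case rarely yields more than a handful of measurement points, so such statistics must be interpreted with care rather than treated as a basis for strong inference.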
Quantitative analysis is not covered any further in this paper, since it is extensively covered in other texts. The rest of this chapter covers qualitative analysis.
For more information about quantitative analysis, refer, for example, to Wohlin et al. In study RE and study QC, the main analyses were conducted with quantitative methods, mainly through analysis of correlation and descriptive statistics, such as scatter plots.
In the QC case, the quantitative data acted as a trigger for deeper understanding. Patterns in the data, and lack thereof, generated questions in the feedback session.
The answers led to changes in the data analysis, e.g., filtering out some data sources, and to identification of real patterns in the data. In study XP, the main analysis was conducted with qualitative methods, but this was combined with a limited quantitative analysis of the number of defects found during different years in one of the organizations.
However, there would probably have been possibilities to conduct more complementary analyses in order to corroborate or develop the results from the qualitative analysis.

5.2 Qualitative Data Analysis

Since case study research is a flexible research method, qualitative data analysis methods (Seaman 1999) are commonly used. The basic objective of the analysis is to derive conclusions from the data, keeping a clear chain of evidence.
The chain of evidence means that a reader should be able to follow the derivation of results and conclusions from the collected data (Yin 2003). This means that sufficient information from each step of the study and every decision taken by the researcher must be presented.
In addition to the need to keep a clear chain of evidence in mind, analysis of qualitative research is characterized by analysis being carried out in parallel with the data collection, and by the need for systematic analysis techniques. Analysis must be carried out in parallel with the data collection since the approach is flexible and new insights are found during the analysis.
In order to investigate these insights, new data must often be collected, and instrumentation such as interview questionnaires must be updated. The need to be systematic is a direct result of the fact that the data collection techniques can be constantly updated, while at the same time the chain of evidence must be maintained.
In order to reduce bias by individual researchers, the analysis benefits from being conducted by multiple researchers. The preliminary results from each individual researcher are merged into a common analysis result in a second step.
Keeping track of and reporting the cooperation scheme helps increase the validity of the study.

5.2.1 General Techniques for Analysis

There are two different categories of techniques for analysis of qualitative data: hypothesis generating techniques and hypothesis confirmation techniques (Seaman 1999), which can be used for exploratory and explanatory case studies, respectively. Hypothesis generation is intended to find hypotheses from the data.
When using these kinds of techniques, there should not be too many hypotheses defined before the analysis is conducted. Instead the researcher should try to be unbiased and open for whatever hypotheses are to be found in the data.
The results of these techniques are the hypotheses as such. Examples of hypotheses generating techniques are “constant comparisons” and “cross-case analysis” (Seaman 1999).
Hypothesis confirmation techniques denote techniques that can be used to confirm that a hypothesis is really true. Triangulation and replication are examples of approaches for hypothesis confirmation (Seaman 1999).
Negative case analysis tries to find alternative explanations that reject the hypotheses. These basic types of techniques are used iteratively and in combination.
First hypotheses are generated and then they are confirmed. Hypothesis generation may take place within one cycle of a case study, or with data from one unit of analysis, and hypothesis confirmation may be done with data from another cycle or unit of analysis (Andersson and Runeson 2007b).
This means that analysis of qualitative data is conducted in a series of steps (based on Robson 2002). First the data is coded, which means that parts of the text can be given a code representing a certain theme, area, construct, etc. One code is usually assigned to many pieces of text, and one piece of text can be assigned more than one code.
Codes can form a hierarchy of codes and sub-codes. The coded material can be combined with comments and reflections by the researcher (i.e., memos).
When this has been done, the researcher can go through the material to identify a first set of hypotheses. This can, for example, be phrases that are similar in different parts of the material, patterns in the data, differences between sub-groups of subjects, etc.
The identified hypotheses can then be used when further data collection is conducted in the field, resulting in an iterative approach where data collection and analysis are conducted in parallel as described above. During the iterative process a small set of generalizations can be formulated, eventually resulting in a formalized body of knowledge, which is the final result of the research attempt.
This is, of course, not a simple sequence of steps. Instead, they are executed iteratively and they affect each other.
The activity where hypotheses are identified requires some more information. This is in no way a simple step that can be carried out by following a detailed, mechanical, approach.
Instead it requires the ability to generalize, innovative thinking, etc. This can be compared to quantitative analysis, where the majority of the innovative and analytical work of the researcher is in the planning phase. There is, of course, also a need for innovative work in the analysis of quantitative data, but it is not as clear as in the planning phase.
In qualitative analysis there are major needs for innovative and analytical work in both phases. One example of a useful technique for analysis is tabulation, where the coded data is arranged in tables, which makes it possible to get an overview of the data.
The data can, for example, be organized in a table where the rows represent codes of interest and the columns represent interview subjects. However, how to do this must be decided for every case study.
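A minimal sketch of such a tabulation is shown below; all codes, subjects, and counts are made up for illustration, and a spreadsheet tool would serve the same purpose.

```python
# Illustrative sketch: tabulating coded data with rows as codes and
# columns as interview subjects, giving an overview of where each
# theme occurs. All names and counts are hypothetical.

# (code, subject) -> number of coded statements for that pair
coded_counts = {
    ("planning", "Developer 1"): 3,
    ("planning", "Manager 1"): 5,
    ("quality", "Developer 1"): 2,
}

codes = ["planning", "quality"]
subjects = ["Developer 1", "Manager 1"]

# Print a simple fixed-width table.
header = "code".ljust(10) + "".join(s.ljust(14) for s in subjects)
print(header)
for code in codes:
    row = code.ljust(10)
    for subject in subjects:
        # Missing (code, subject) pairs simply count as zero.
        row += str(coded_counts.get((code, subject), 0)).ljust(14)
    print(row)
```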
There are specialized software tools available to support qualitative data analysis. However, in some cases standard tools such as word processors and spreadsheet tools are useful when managing the textual data.
In study XP, the transcribed interviews were initially analyzed by one of the researchers. A preliminary set of codes was derived from the informal notes and applied to the transcripts.
The preliminary set of codes was: project model, communication, planning, follow-up, quality, technical issues and attitudes. Each statement in the transcribed interviews was given a unique identification, and classified by two researchers.
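When two researchers classify the same statements, a simple percent agreement can indicate how consistently the coding scheme is applied. The sketch below is hypothetical; the study itself does not report such a calculation, and the statement identifiers and codes are invented.

```python
# Hypothetical sketch: percent agreement between two researchers who
# independently classified the same uniquely identified statements.
coder_a = {"S1": "planning", "S2": "quality", "S3": "attitudes"}
coder_b = {"S1": "planning", "S2": "communication", "S3": "attitudes"}

# Count statements where both researchers assigned the same code.
agreed = sum(coder_a[s] == coder_b[s] for s in coder_a)
agreement = agreed / len(coder_a)
print(f"{agreement:.2f}")  # 0.67
```

Disagreements found this way can then be discussed and resolved, which strengthens the reliability of the coding.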
The transcribed data was then filled into tables, allowing for analysis of patterns in the data by sorting issues found by, for example, interviewee role or company. The chain of evidence is illustrated with the figure below (from Karlström and Runeson 2006).

5.2 Level of Formalism
A structured approach is, as described above, important in qualitative analysis.
This means, for example, that in all cases a pre-planned approach for analysis must be applied, all decisions taken by the researcher must be recorded, all versions of instrumentation must be kept, and links between data, codes, and memos must be explicitly recorded in documentation. However, the analysis can be conducted at different levels of formalism.
In (Robson 2002) the following approaches are mentioned:

Immersion approaches: These are the least structured approaches, relying heavily on the intuition and interpretive skills of the researcher. These approaches may be hard to combine with requirements on keeping and communicating a chain of evidence.

Editing approaches: These approaches include few a priori codes, i.e. codes are defined and modified during the analysis.

Template approaches: These approaches are more formal and include more a priori codes, based on research questions.

Quasi-statistical approaches: These approaches are highly formalized and include, for example, calculation of frequencies of words and phrases.
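A quasi-statistical approach can be sketched as a simple word-frequency count over transcribed text; the transcript snippet below is invented for illustration.

```python
# Illustrative sketch of a quasi-statistical approach: counting word
# frequencies in transcribed interview text (the transcript is made up).
from collections import Counter
import re

transcript = (
    "We planned the sprint, but the plan changed. "
    "Planning is hard when requirements change."
)

# Lowercase and extract alphabetic tokens before counting.
words = re.findall(r"[a-z]+", transcript.lower())
frequencies = Counter(words)
print(frequencies.most_common(1))  # [('the', 2)]
```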
In our experience, editing approaches and template approaches are most suitable in software engineering case studies. It is hard to present and obtain a clear chain of evidence in informal immersion approaches.
It is also hard to interpret the result of, for example, frequencies of words in documents and interviews. In study XP, the analysis started with a set of codes (see Section 5.1), which was extended and modified during the analysis. For example, the code “communication” was split into four codes: “horizontal communication”, “vertical communication”, “internal communication” and “external communication”.
5.3 Validity
The validity of a study denotes the trustworthiness of the results: to what extent the results are true and not biased by the researchers’ subjective point of view. It is, of course, too late to start considering validity during the analysis; validity must be addressed during all previous phases of the case study. However, validity is discussed in this section, since it cannot be finally evaluated until the analysis phase.
There are different ways to classify aspects of validity and threats to validity in the literature. Here we chose a classification scheme which is also used by Yin (2003) and similar to what is usually used in controlled experiments in software engineering (Wohlin et al.).
Some researchers have argued for having a different classification scheme for flexible design studies (credibility, transferability, dependability, confirmability), while we prefer to operationalize this scheme for flexible design studies, instead of changing the terms (Robson 2002).
This scheme distinguishes between four aspects of validity, which can be summarized as follows:

Construct validity: This aspect of validity reflects to what extent the operational measures that are studied really represent what the researcher has in mind and what is investigated according to the research questions. If, for example, the constructs discussed in the interview questions are not interpreted in the same way by the researcher and the interviewed persons, there is a threat to the construct validity.

Internal validity: This aspect of validity is of concern when causal relations are examined. When the researcher is investigating whether one factor affects an investigated factor, there is a risk that the investigated factor is also affected by a third factor. If the researcher is not aware of the third factor and/or does not know to what extent it affects the investigated factor, there is a threat to the internal validity.

External validity: This aspect of validity is concerned with to what extent it is possible to generalize the findings, and to what extent the findings are of interest to other people outside the investigated case. During analysis of external validity, the researcher tries to analyze to what extent the findings are of relevance for other cases. There is no population from which a statistically representative sample has been drawn. However, for case studies, the intention is to enable analytical generalization, where the results are extended to cases which have common characteristics and hence for which the findings are relevant.

Reliability: This aspect is concerned with to what extent the data and the analysis are dependent on the specific researchers. Hypothetically, if another researcher later on conducted the same study, the result should be the same. Threats to this aspect of validity are, for example, if it is not clear how to code collected data or if questionnaires or interview questions are unclear.
It is, as described above, important to consider the validity of the case study from the beginning. Examples of ways to improve validity are triangulation; developing and maintaining a detailed case study protocol; having designs, protocols, etc. reviewed by peer researchers; having collected data and obtained results reviewed by case subjects; spending sufficient time with the case; and giving sufficient concern to analysis of “negative cases”, i.e. looking for theories that contradict your findings. In study XP, validity threats were analyzed based on a checklist by Robson (2002).
It would also have been possible to analyze threats according to construct validity, internal validity, external validity, and reliability. Countermeasures against threats to validity were then taken.
For example, triangulation was achieved in different ways, results were reviewed by case representatives, and potential negative cases were identified by having two researchers working with the same material in parallel. It was also seen as important that sufficient time was spent with the organization in order to understand it.
Even if the case study lasted for a limited time, this threat was reduced by the fact that the researchers had had a long-term cooperation with the organization before the presented case study. Data triangulation was used to check which phase the defect reports originated from.
The alignment between the phase reported in the trouble report, and the person’s tasks in the project organization was checked.

Checklist items for the data analysis phase:

Is the analysis methodology defined, including roles and review procedures?
23. Is a chain of evidence shown with traceable inferences from data to research questions and existing theory?
24. Are alternative perspectives and explanations used in the analysis?
25. Is a cause–effect relation under study? If yes, is it possible to distinguish the cause from other factors in the analysis?
26. Are there clear conclusions from the analysis, including recommendations for practice/further research?
27. Are threats to the validity analyzed in a systematic way and countermeasures taken? (Construct, internal, external, reliability)

6 Reporting
An empirical study cannot be distinguished from its reporting. The report communicates the findings of the study, but is also the main source of information for judging the quality of the study. Reports may have different audiences, such as peer researchers, policy makers, research sponsors, and industry practitioners (Yin 2003).
This may lead to the need of writing different reports for different audiences.
For peer researchers, the typical format is journal or conference articles and possibly accompanying technical reports. It has been proposed that due to the extensive amount of data generated in case studies, “books or monographs might be better vehicles to publish case study research” (Benbasat et al.).
Guidelines for reporting experiments have been proposed by Jedlitschka and Pfahl (2005) and evaluated by Kitchenham et al.
Their work aims at defining a standardized way of reporting experiments that enables cross-study comparisons.
For case studies, the same high-level structure may be used, but since they are more flexible and mostly based on qualitative data, the low-level detail is less standardized and more dependent on the individual case. Below, we first discuss the characteristics of a case study report and then a proposed structure.
6.1 Characteristics
Robson defines a set of characteristics which a case study report should have (Robson 2002), which in summary implies that it should:

tell what the study was about;
communicate a clear sense of the studied case;
provide a “history of the inquiry” so the reader can see what was done, by whom and how;
provide basic data in focused form, so the reader can make sure that the conclusions are reasonable;
articulate the researcher’s conclusions and set them into a context they affect.

In addition, a balance must be kept between the researchers’ duty and goal to publish their results, and the companies’ and individuals’ integrity (Amschler Andrews and Pradhan 2001).
Reporting the case study objectives and research questions is quite straightforward. If they are changed substantially over the course of the study, this should be reported to help understanding the case.
Describing the case might be more sensitive, since this might enable identification of the case or its subjects. For example, “a large telecommunications company in Sweden” is most probably a branch of the Ericsson Corporation.
However, the case may be better characterized by other means than application domain and country. Internal characteristics, like size of the studied unit, average age of the personnel, etc., may be more interesting than external characteristics like domain and turnover.
Either the case constitutes a small subunit of a large corporation, and then it can hardly be identified among the many subunits, or it is a small company and hence it is hard to identify it among many candidates. Providing a “history of the inquiry” requires substantially more detail than pure reporting of used methodologies, e.g. “we launched a case study using semi-structured interviews”. Since the validity of the study is highly related to what is done, by whom and how, the sequence of actions and the roles acting in the study process must be reported.
On the other hand, there is no room for every single detail of the case study conduct, and hence a balance must be found. Data is collected in abundance in a qualitative study, and the analysis has as its main focus to reduce and organize data to provide a chain of evidence for the conclusions.
However, to establish trust in the study, the reader needs relevant snapshots from the data that support the conclusions, e.g. citations (typical or special statements), pictures, or narratives with anonymized subjects.
Further, categories used in the data classification, leading to certain conclusions, may help the reader follow the chain of evidence. Finally, the conclusions must be reported and set into a context of implications.
A case study cannot be generalized in the meaning of being representative of a population, but this is not the only way of achieving and transferring knowledge. Conclusions can be drawn without statistics, and they may be interpreted and related to other cases.
Communicating research results in terms of theories is an underdeveloped practice in software engineering (Hannay et al.).

6.2 Structure
Alternative structures for a case study report include:

Linear-analytic—the standard research report structure (problem, related work, methods, analysis, conclusions).
Comparative—the same case is repeated twice or more to compare alternative descriptions, explanations or points of view.
Chronological—a structure most suitable for longitudinal studies.
Theory-building—presents the case according to some theory-building logic in order to constitute a chain of evidence for a theory.
Suspense—reverts the linear-analytic structure and reports conclusions first and then backs them up with evidence.
Unsequenced—e.g. when reporting general characteristics of a set of cases.
For the academic reporting of case studies, on which we focus, the linear-analytic structure is the most accepted. The high-level structure for reporting experiments in software engineering proposed by Jedlitschka and Pfahl (2005) therefore also fits the purpose of case study reporting.
However, some changes are needed, based on specific characteristics of case studies and other issues based on an evaluation conducted by Kitchenham et al. The differences and our considerations are presented below.
Table 9 Proposed reporting structure by Jedlitschka and Pfahl (2005), modifications proposed by Kitchenham et al. (2008), and adaptations to case study reporting, influenced by Robson (2002)