Adaptive learning systems have been suggested as a highly promising means of supporting work-integrated learning. Adaptivity in a learning system results from a complex interaction between a domain model, a learner model, and an instructional model. Existing approaches for evaluating adaptive learning systems take the learner model and the instructional model into account; the domain model, however, is mostly neglected. In my thesis, I conceptualize the required properties of the domain model of an adaptive work-integrated learning system and derive four research questions: (Q1) Is the set of tasks exhaustive and conceptually correct? (Q2) Is the set of skills exhaustive and conceptually correct? (Q3) Is the requires relation conceptually correct (content validity)? (Q4) Are inferences made from the requires relation correct (application validity)? I then suggest a variety of methods for data collection and analysis to answer these research questions. Each method was applied in one or more of six case and method studies spanning four different domains. Case Study 1 was a rigorous validation study in the domain of statistical data analysis (a student domain), focusing on Q1, Q2, and Q3. Case Study 2 investigated Q1 and Q2 in the real work domain of aircraft simulation. Method Studies 1 and 2 dealt with the reliability and validity of self-assessment and peer-assessment. Case Study 3 tested Q4 in two real work domains, aircraft simulation and innovation management. In Case Study 4, a validation session was carried out in the innovation management domain to answer Q1 and Q2. Based on the findings from these studies, methodological considerations for each method are discussed extensively. The dissertation shows how the techniques can be used to validate the domain model of an adaptive work-integrated learning system, how they can be combined, and what conclusions can be drawn from their outcomes.