IJCRR - 7(14), July, 2015
Pages: 61-68
A FRAMEWORK FOR A HEURISTIC APPROACH TO EVALUATING AND ASSESSING ADAPTIVE HYPERMEDIA LEARNING SYSTEMS
Author: Jean-Pierre Kabeya Lukusa
Category: Healthcare
Abstract: This article provides a holistic view of e-Learning as a by-product of Adaptive Hypermedia Learning Systems (AHLS) and proposes a generic framework for evaluating and assessing AHLS. While many of the existing assessment and evaluation instruments yield useful findings (1), most of them revolve around one key problem: with so many variables that can potentially affect the quality of these instruments, how do we re-adapt the assessment and evaluation instruments to produce results that are relevant to our learners’ ethnographic background, pedagogical paradigm, and the actual AHLS? It was also found that many authors choose, consciously or otherwise, to disregard some variables (2). This practice should be discouraged, as it not only constrains findings but also distorts analysis of the flaws (and strengths) in current AHLS deployments. The proposed framework is designed on a premise that considers a number of studies, namely the E-VAL project models, which consider ethnographic, pedagogical, and AHLS-specific factors, and the Learning Object Review Instrument (LORI), which considers nine dimensions of quality (3).
Keywords: Adaptive hypermedia learning system (AHLS), Ethnographic research methods, Pedagogical paradigm, Heuristic approach, e-Learning assessment, Evaluation framework
Full Text:
INTRODUCTION
This paper proposes a heuristic framework with the sole intent of improving the existing evaluation and assessment methodologies used to determine the effectiveness of Adaptive Hypermedia Learning Systems (AHLS). AHLS, also known as Adaptive Educational Hypermedia Systems (AEHS), are reusable learning resources specifically designed to customize educational content to an individual learner’s preferences (1). To better model the proposed framework, the author adopts a learner-centric (4) approach that views e-Learning ventures as by-products of AHLS. In light of this, care has been taken to generalize the proposed framework and present it in a universal format so that it can easily be adapted to e-Learning assessment or evaluation studies.
BACKGROUND TO THE STUDY
AHLS are designed with the single intent of providing educational content tailored to an individual learner (5). This is done by employing a ‘user model’ built from parameters derived from human factors (1). These human factors play an important role in the conception and development of AHLS, ranging from gender differences (6) through prior knowledge to cognitive styles (7). The underlying premise of AHLS is that just as people differ in many respects, so do the ways in which they learn (8); individual learners therefore need personalized learning styles, and AHLS are specifically designed to provide this personalized service. AHLS close this functional gap in Learning Management Systems (LMS) by providing a deployment approach that tailors a given learning experience (i.e. content and class activities) to an individual learner’s needs (9).
PROBLEM STATEMENT
A review of the existing literature was conducted, leading to the conclusion that most instruments developed for the evaluation and assessment of AHLS are justified through ethnographic research methods (i.e. handing questionnaires to participants) (2)(10). While many of these tools bore useful findings, most seem to revolve around one key question: how do we re-adapt the assessment and evaluation instruments to produce results that are relevant to our learners’ ever-changing ethnographic background, pedagogical paradigm, and the actual adaptive hypermedia learning system? The usual practice of disregarding some variables (11) should be discouraged, as it not only constrains findings but also distorts analysis of the flaws (and strengths) in current e-Learning ventures.
MOTIVATION
In addition to bringing a number of benefits such as low cost, ease of access, and user convenience, the primary concern of e-Learning ventures is to improve existing learning processes and to provide means for an easy integration of new teaching strategies (10). One key challenge faced by many researchers in this regard is coming up with the right approach to evaluating and assessing the entire e-Learning venture. Another major problem is deciding on the inclusion of variables (12) that may potentially have an impact on the study design (2) and determining what constitutes dependent, independent, and irrelevant variables in the study (13). Depending on the scale of the study, this inherent problem may bias the conclusions and prevent the study from accurately gauging the significance of the selected variables, or cause it to miss them altogether (2). To derive the pool of variables of interest, a review of numerous studies, including the E-VAL project and the LORI project, was conducted. To avoid discarding important variables, cluster analysis (14) was instead used to converge them into homogeneous groups. The proposed framework is therefore presented as a heuristic approach to assessment and evaluation methodology, offering a framework that is both generic and adaptable to e-Learning systems as by-products of AHLS.
CONCEPTUAL FRAMEWORK
In order to establish a baseline for the study, the proposed framework will be referred to as the e-Val Framework, in which ‘e’ stands for e-Learning and ‘Val’ for the valuation process that one goes through to determine the actual benefits of an e-Learning venture. The following diagram demonstrates how the e-Val Framework would work in a given e-Learning scenario.
It is therefore hoped that the e-Val Framework may provide both a means for conducting an assessment aimed at increasing the level of quality and/or performing an evaluation with the intent of judging the level of quality of the e-Learning venture (15).
THE E-VAL FRAMEWORK: A TOOL FOR PARAMETRIC DEDUCTIONS
Derived e-Learning Venture Characteristics and Factors
The related studies reviewed yielded the list of generic factors presented in table 1 and the list of characteristics presented in table 2.
CLUSTER ANALYSIS: MEASURING HOMOGENEITY
Guiding Assumptions
Representativeness of the Sample
For the initial run of the model, our pilot study group consisted of 32 randomly selected volunteers. The volunteers’ rankings of each of the 32 key characteristics against factors were considered, and an average measure was drawn for each characteristic for processing. To determine the factor-specific groups of homogeneous objects, the K-Means cluster algorithm (12) was used. We refer to these factor-specific groups as clusters.
As illustrated in figure 3, each of the coded characteristics was ranked on a factor-specific scale (i.e. counts not recurring within characteristics). To tie the characteristics to factors, a Likert scale of 1 to 6 was used, and the collected data were processed using SPSS.
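To make the clustering step concrete, the sketch below applies a simple one-dimensional K-Means (the algorithm the study runs in SPSS) to averaged Likert ratings. The characteristic codes follow the paper, but the rating values and the choice of k = 2 are invented here purely for illustration.

```python
# Hypothetical sketch: grouping coded characteristics by their average
# Likert ratings (scale 1-6) with a minimal 1-D K-Means.
import random

def kmeans_1d(values, k, iters=100, seed=0):
    """Cluster scalar ratings into k groups; returns (labels, centroids)."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    labels = [0] * len(values)
    for _ in range(iters):
        # Assignment step: each value joins its nearest centroid.
        labels = [min(range(k), key=lambda j: abs(v - centroids[j]))
                  for v in values]
        # Update step: centroid becomes the mean of its members
        # (an empty cluster keeps its old centroid).
        for j in range(k):
            members = [v for v, l in zip(values, labels) if l == j]
            if members:
                centroids[j] = sum(members) / len(members)
    return labels, centroids

# Average Likert scores for a few coded characteristics (illustrative).
ratings = {"LC": 5.1, "LH": 4.9, "LA": 2.2, "LM": 2.4, "LF": 5.3}
labels, centroids = kmeans_1d(list(ratings.values()), k=2)
for code, label in zip(ratings, labels):
    print(code, "-> cluster", label)
```

On this well-separated toy data the algorithm converges to the same two groups regardless of the random initialization; real survey data would typically need several runs and a choice of k informed by the factor list.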
Reduced Impact of Multicollinearity
Multicollinearity occurs when two or more predictor variables in a multiple regression model are highly correlated, allowing one variable to linearly predict the other(s) with a non-trivial degree of accuracy (18). Multicollinearity is an important consideration for researchers who wish to introduce new variables within the identified characteristics, or new characteristics altogether, to improve the model factors. As a recommendation to future studies, to avoid any effect (19) of multicollinearity on the predictor variables one should:
• Reduce the variables to equal numbers in each set of correlated measures, or
• Use a distance measure that compensates for the correlation, such as the Mahalanobis distance (12)
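As a minimal sketch of the second recommendation, the following computes a Mahalanobis distance for two correlated measures using the closed-form inverse of a 2x2 sample covariance matrix; the rating data are invented for illustration.

```python
# Illustrative only: two strongly correlated Likert measures, and the
# Mahalanobis distance from a query point to their centroid.

def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    """Sample covariance of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def mahalanobis_2d(point, xs, ys):
    """Distance from `point` to the centroid of (xs, ys), scaled by the
    inverse sample covariance (closed-form 2x2 matrix inverse)."""
    sxx, syy, sxy = cov(xs, xs), cov(ys, ys), cov(xs, ys)
    det = sxx * syy - sxy * sxy
    dx, dy = point[0] - mean(xs), point[1] - mean(ys)
    d2 = (syy * dx * dx - 2 * sxy * dx * dy + sxx * dy * dy) / det
    return d2 ** 0.5

x = [5.0, 4.0, 3.0, 2.0, 1.0]   # hypothetical measure 1
y = [5.2, 4.1, 2.9, 2.1, 1.1]   # hypothetical, highly correlated measure 2
print(mahalanobis_2d((4.0, 4.1), x, y))
```

Because the two measures move together, a point that lies along their shared trend is treated as close even when its raw Euclidean distance from the centroid is larger.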
Deriving Clusters
In order to ensure that the identified characteristics (i.e. LC, LH, LA, etc.) in table 2 are properly classified according to the generic factors identified in table 1, cluster analysis techniques are employed to guarantee that the resulting factor-based grouping (18) of the characteristics exhibits high internal (within-cluster) homogeneity and high external (between-cluster) heterogeneity (19).
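The homogeneity/heterogeneity criterion can be checked with a very simple diagnostic: within-cluster spread should be small relative to the separation between cluster centroids. The cluster memberships and values below are hypothetical.

```python
# Illustrative diagnostic for the clustering criterion: compare each
# cluster's internal spread against the distance between centroids.

def centroid(xs):
    return sum(xs) / len(xs)

def within_spread(xs):
    """Mean absolute deviation of members from their own centroid."""
    c = centroid(xs)
    return sum(abs(x - c) for x in xs) / len(xs)

cluster_a = [5.1, 4.9, 5.3]   # hypothetical group of characteristic scores
cluster_b = [2.2, 2.4, 2.0]   # hypothetical second group

between = abs(centroid(cluster_a) - centroid(cluster_b))
print(within_spread(cluster_a), within_spread(cluster_b), between)
```

A grouping satisfies the criterion when both within-cluster spreads are much smaller than the between-centroid distance, as they are here.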
Defining the Identified Clusters
To facilitate adaptation to multiple studies, a common language based on First Order Logic (FOL) (16)(17) will be adopted to present the resultant clusters (i.e. the independent variables).
Cluster 01: Learner’s Aptitude
Cluster 02: Learning Context Suitability
Cluster 03: Technological Appropriateness
Cluster 04: Pedagogical Alignment
Cluster 05: Quality Alignment
Cluster 06: Learning Environment’s Appropriateness
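The paper's own FOL formulas are not reproduced above; as one hedged illustration of the format, the membership condition for Cluster 01 (Learner's Aptitude) might be written as follows, where the predicate names are assumptions introduced here for illustration only:

```latex
% Illustrative only: predicate names are hypothetical.
\forall x \, \big( \mathit{Learner}(x) \land \mathit{LC}(x) \land \mathit{LH}(x)
  \land \mathit{LA}(x) \land \mathit{LM}(x) \land \mathit{LF}(x)
  \rightarrow \mathit{LearnerAptitude}(x) \big)
```

Writing each cluster as a universally quantified implication of this kind lets a researcher swap characteristics in or out without changing the surrounding framework.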
THREATS TO VALIDITY OF THE E-VAL FRAMEWORK REGRESSION MODELS
In order to avoid a biased estimator of the causal effect due to the identified regressors, one needs to ensure that the statistical inferences about causal effects are valid for the population being studied (i.e. internal validity) and that they can be generalized from the population and setting studied to other populations and settings (i.e. external validity) (19). In this context the term “settings” refers to the legal, policy, and physical environment and other related salient features.
MODEL VALIDITY
This section demonstrates how the multiple regression models described in the e-Val Framework can be used to conduct assessment and/or evaluation in an actual study. For the sake of brevity, the regression model for learner’s aptitude is used. Adapting the e-Val Framework depends entirely on how a given researcher frames his/her research objectives. To demonstrate this, let us suppose that our aim is to run an experiment in which an individual learner’s characteristics are described using the following variables:
• LC described in terms of age: LCA, sex: LCS, and visual impairment: LCV of the respondent.
• LH described in terms of experience rating: LHE, level of attainment: LHL, and enrolment program duration: LHD.
• LA described in terms of attitude of learner towards learning experience: LAE
• LM described in terms of learner’s motivation towards a given learning style: LMS
• LF described in terms of learner’s familiarity with the technology: LFT
Therefore for the tuple L:
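A hedged sketch of how the tuple L and a linear aptitude model might be encoded is shown below. The variable names follow the definitions above, while the respondent record and the coefficient values are invented purely for illustration; in an actual study the coefficients would come from fitting the regression model to survey data (e.g. in SPSS).

```python
# One respondent's record for the tuple L (hypothetical values).
L = {
    "LCA": 24,   # age
    "LCS": 1,    # sex (coded)
    "LCV": 0,    # visual impairment (0 = none)
    "LHE": 4,    # experience rating
    "LHL": 3,    # level of attainment
    "LHD": 2,    # enrolment program duration (years)
    "LAE": 5,    # attitude towards the learning experience
    "LMS": 4,    # motivation towards a given learning style
    "LFT": 5,    # familiarity with the technology
}

# Illustrative regression coefficients (not fitted to any real data).
beta0 = 1.0
betas = {"LCA": 0.01, "LCS": 0.1, "LCV": -0.5, "LHE": 0.3,
         "LHL": 0.2, "LHD": 0.1, "LAE": 0.25, "LMS": 0.2, "LFT": 0.15}

# Predicted aptitude = intercept + weighted sum of the tuple's variables.
aptitude = beta0 + sum(betas[k] * L[k] for k in betas)
print(round(aptitude, 2))
```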
It can be deduced from table 6 that the F-ratio is not statistically significant. Hence the conclusion that the assumption of equal variances is tenable (i.e. there is homogeneity of variance).
Therefore, on the basis of the data provided, there is insufficient evidence to conclude that the learner’s attitude towards a given learning experience has a significant effect on his/her level of aptitude at the 1%, 5%, and 10% significance levels.
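The equal-variances check behind this conclusion can be sketched as an F-ratio of two group variances; the group scores below are illustrative, not the study's data.

```python
# Illustrative equal-variances check: the F-ratio of two sample
# variances (larger variance on top), as reported in table 6.

def variance(xs):
    """Unbiased sample variance (divisor n - 1)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

group_a = [4.1, 3.9, 4.4, 4.0, 4.2]   # hypothetical scores, group A
group_b = [3.8, 4.3, 4.1, 3.7, 4.0]   # hypothetical scores, group B

va, vb = variance(group_a), variance(group_b)
f_ratio = max(va, vb) / min(va, vb)
print(round(f_ratio, 3))
```

An F-ratio this close to 1 falls well below the usual critical values of the F-distribution for these sample sizes, so the assumption of equal variances would be retained.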
QUESTION 2: Does the learner’s visual impairment have a negative effect on his/her level of aptitude?
Therefore, on the basis of the data provided, there is insufficient evidence to conclude that the learner’s visual impairment has a significantly negative effect on his/her level of aptitude at the 1% significance level.
CONCLUSION AND FUTURE WORK
This paper is a step toward the development of robust and generic research methodologies for measuring the effectiveness of e-Learning ventures through evaluation and assessment. A concerted effort has been made to ensure that the proposed e-Val Framework is not only grounded in solid theoretical precepts but also presented in a format that is comprehensive, generic, and adaptable enough for easy integration as a measuring tool in numerous studies. Deliberate measures have been taken to give researchers the flexibility to use the e-Val Framework on the assumption that they will adapt it to their specific needs. Moreover, instead of sticking to a purely multiple-regression approach, one might adopt a multivariate approach by combining the individual regressor equations to derive a more unified measure of effectiveness centered on the user, the e-Learning resource, the quality of the learning style, and so on.
ACKNOWLEDGEMENTS
I would like to thank the Office of Research and Quality Management at Botho University for sponsoring the presentation costs of this paper. I would also like to thank the colleagues and friends who took the time to review it and provide much-needed feedback; their objective comments have truly made this paper possible. Finally, I acknowledge the immense help received from the scholars whose articles are cited in the references of this manuscript, and I am grateful to the authors, editors, and publishers of all the articles, journals, and books from which the literature for this article has been reviewed and discussed.
References:
1. Alomyan H. Individual Differences: Implications for WebBased Learning Design. International Education Journal. 2004; 4(4): p. 188-196.
2. Attwell G. Evaluating E-Learning: A Guide to the Evaluation of E-Learning. 2nd ed. Gremen: Evaluate Europe Handbook Series; 2006.
3. Leacock TL, Nesbit JC. A Framework for Evaluating the Quality of Multimedia Learning Resources. Educational Technology and Society. 2007; 10(2): p. 44-59.
4. Hall Haley M. Learner-centered instruction and the theory of multiple intelligences with second language learners. The Teachers College Record. 2004; 106(1): p. 163-180.
5. Graf S, Kinshuk. Providing Adaptive Courses in Learning Management Systems with Respect to Learning Styles. World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education. 2007; 2007(1): p. 2576-2583.
6. Schumacher P, Morahan-Martin J. Gender, Internet and Computer Attitudes and Experiences. Computers in Human Behavior. 2001; 17(1): p. 95-110.
7. Mitchell TJF, Chen SY, Macredie RD. Cognitive Styles and Adaptive Web-based Learning. Psychology of Education Review. 2005; 29(1): p. 34-42.
8. Morrison GR, Ross SM, Kemp JE, Kalman H. Designing Effective Instruction. New York: John Wiley and Sons; 2010.
9. Oneto L, Abel F, Herder E, Smits D. Making Today's Learning Management Systems Adaptive. In: Learning Management Systems meet Adaptive Learning Environments, Workshop at the European Conference on Technology Enhanced Learning (EC-TEL); 2009.
10. Cheniti-Belcadhi L, Braham R, Henze N, Nejdl W. A Generic Framework for Assessment in Adaptive Educational Hypermedia. Madrid: Springer; 2004. p. 397-404.
11. Henze N, Nejdl W. Logically characterizing adaptive educational hypermedia systems. In: International Workshop on Adaptive Hypermedia and Adaptive Web-based Systems; 2003; Budapest.
12. Morrison DG. Measurement Problems in Cluster Analysis. Management Science. 1967; 13(12): p. 775-80.
13. MacQueen J. Some Methods for Classification and Analysis of Multivariate Observations. Proceedings of the Fifth Berkely Symposium on Mathematical Statistics and Probability. 1967; 1(14): p. 281-297.
14. Van der Laan M, Pollard K, Bryan J. A New Partitioning Around Medoids Algorithm. Journal of Statistical Computation and Simulation. 2003; 73(8): p. 575-584.
15. Angelo TA, Cross KP. Classroom Assessment Techniques: A Handbook for College Teachers. San Francisco: Wiley Imprint; 1993. p. 427.
16. Russell S, Norvig P. Artificial Intelligence: A Modern Approach. 2nd ed. Englewood Cliffs: Prentice Hall; 2009.
17. Brusilovsky P, Millán E. User Models for Adaptive Hypermedia and Adaptive Educational Systems. 1st ed. Berlin, Heidelberg: Springer-Verlag; 2007.
18. Bailey KD. Typologies and Taxonomies: An Introduction to Classification Techniques. Thousand Oaks: Sage; 1994.
19. Hair JF, BWC, BBJ, ARE, and TRL. Multivariate Data Analysis. 6th ed. New York: Pearson Prentice Hall; 2006.
20. Reynolds A, Richards G, Rayward-Smith VJ. The Application of K-Medoids and PAM to the Clustering of Rules. In: Intelligent Data Engineering and Automated Learning - IDEAL 2004. Exeter: Springer-Verlag Berlin Heidelberg; 2004. p. 173-178.
21. Pressey SL. A Simple Apparatus which gives tests and scores - and teaches. School and Society. 1926; 23(586): p. 373-376.
22. Pressey SL. A Machine for Automatic Teaching of Drill Material. School and Society. 1927; 25(645): p. 549-552.