#1: Jones, A., Scanlon, E., Tosunoglu, C., Morris, E., Ross, S., Butcher, P., & Greenberg, J. (1999). Contexts for evaluating educational software. Interacting with Computers, 11, 499-516. In this paper, the authors discuss the evaluation of educational software as it relates to the human-computer interaction (HCI) community and the educational technology community. The authors' major concern is that evaluations developed for educational software tend to lean toward the needs of either the HCI community or the educational technology community, but not both. Given this situation, the authors discuss how the two communities differ in their evaluation practices and where those practices actually overlap. The authors go on to discuss a new evaluation model named CIAO!, which stands for context, interactions, attitudes and outcomes. This model is designed for evaluating computer-assisted learning (CAL) in distance education. The authors draw on a variety of studies and research to discuss issues that occur in both communities through the lens of the CIAO! model. In their findings, the model did show some benefit; however, two areas proved troublesome: outcomes and observations. Ultimately, where the CIAO! model falls short is in involving teachers closely in the evaluation and in monitoring students during the evaluation.
#2: Winslow, J., Dickerson, J., & Lee, C. (2013). Evaluating multimedia. In Applied Technologies for Teachers (pp. 251-264). Dubuque, IA: Kendall Hunt. In this chapter, the authors discuss the increasing popularity of multimedia learning materials due to the growth of technology and its decreasing cost. The authors recognize that learners are acquiring technological skills at ever earlier ages, which creates a need for engaging multimedia learning materials. Given this, the chapter presents an eight-dimension evaluation model that can be used as guidance in developing and selecting multimedia for instructional activities, specifically targeting students and teachers. The authors present definitions of multimedia, how technology has affected the use of multimedia, what it means to use multimedia in learning, and the principles for evaluating multimedia learning resources along with their empirical results. The principles listed are as follows: multimedia, contiguity, split-attention, individual differences, coherence, and redundancy. The authors then discuss a framework called the Learning Object Review Instrument (LORI). This evaluation aid has eight dimensions: content quality, learning goal alignment, feedback and adaptation, motivation, presentation design, interaction usability, reusability, and standards compliance. Given the importance of finding high-quality resources, the LORI framework is available to help teachers evaluate the many applications on the market.
#3: Lee, C. & Cherner, T. S. (2015). A comprehensive evaluation rubric for assessing instructional apps. Journal of Information Technology Education: Research, 14, 21-53. Retrieved January 22, 2015 from http://www.jite.org/documents/Vol14/JITEV14ResearchP021-053Yuan0700.pdf. In this paper, the authors discuss the need for an evaluation rubric that examines educational applications designed for instructional purposes. The authors note that there are many rubrics available for evaluating educational computer-based programs, but not many focused on evaluating instructional implications of educational applications. In response to this need, the author’s presenter a comprehensive rubric containing 24-evaluative dimension specifically used to analyze the educational potential of instructional applications. This gives teachers the ability to determine quality applications from those that will not be beneficial for their educational goals. The authors list a variety of reasons why previously created rubrics are not useful in determining quality applications, Some reasons include the fact that some 21st century skills are not address in the rubrics, evaluation dimensions are too vague, and some rubrics were simply not fully developed. The evaluation rubric presented in this paper has 24 evaluation dimensions, each based on the 5-point Likert scale, allowing for quantitative measures for each dimension. The dimensions are categorized into three domains: Instruction, design, and engagement. The rubric was verified by two groups of experts. Finally, the authors discuss limitations of rubric. These include the inability to design a rubric that is appropriate for all applications and as applications change, the rubric will also need to change.
#4: Green, L. S., Hechter, R. P., Tysinger, P. D., & Chassereau, K. D. (2014). Mobile app selection for 5th through 12th grade science: The development of the MASS rubric. Computers & Education, 75, 65-71. In this paper, the authors discuss the need for a rubric to evaluate the many tablet applications available for use by K-12 students. This research is motivated by the advantages of using a tablet over a laptop or desktop, including portability, touch-screen features, and the many applications available. Because there are so many applications for these mobile devices, teachers struggle to find high-quality applications for their classes. The rubric developed for this paper focuses on mobile science applications. Qualitative and quantitative data were collected during four design cycles. The result was the Mobile App Selection for Science (MASS) rubric, which has six dimensions scored on a 4-point response scale. I chose this paper because I am interested in developing rubrics for assignments in my programming course. I began doing so this semester, but I had never thought about rubrics for software applications. I think this is an interesting area of research, and I will look into it further to aid in choosing mobile and non-mobile applications for use in my classes.