How to learn a domain in a computer simulation learning environment



Introduction
In this study, a SIMQUEST (van Joolingen & de Jong, 2003) application was used in which one electrical high-pass filter and two low-pass filters were simulated. A low-pass filter is a circuit offering easy passage to low-frequency signals and difficult passage to high-frequency signals; a high-pass filter does just the opposite. Filters are built with two elements: a resistor (R) and a coil (L), or a resistor and a capacitor (C). In general, the theme of filters and the passage of signals is a difficult subject. In designing the application, we used a series of four simulation interfaces for each of the three filters, presented in the same order for each filter. The complexity of the interfaces in the simulation was increased gradually (see, e.g., White & Frederiksen, 1990).
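The paper gives no formulas or component values, but for first-order filters of either type the cutoff frequency follows directly from the two elements. A minimal sketch in Python, with purely hypothetical component values:

```python
import math

# Cutoff frequency (-3 dB point) of the two first-order filter
# configurations mentioned above. Component values are hypothetical;
# the study does not report the ones used in the simulation.

def rc_cutoff_hz(r_ohm: float, c_farad: float) -> float:
    """f_c = 1 / (2*pi*R*C) for a resistor-capacitor filter."""
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

def rl_cutoff_hz(r_ohm: float, l_henry: float) -> float:
    """f_c = R / (2*pi*L) for a resistor-coil filter."""
    return r_ohm / (2.0 * math.pi * l_henry)

print(f"RC cutoff: {rc_cutoff_hz(1_000.0, 100e-9):.0f} Hz")  # ~1592 Hz
print(f"RL cutoff: {rl_cutoff_hz(1_000.0, 0.1):.0f} Hz")     # ~1592 Hz
```

Below the cutoff frequency a low-pass filter passes signals nearly unattenuated; above it, attenuation grows with frequency. For a high-pass filter the roles are reversed.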


Each series started with a simple interface presenting the elements of the filter, so that students could learn how the individual elements react to frequency changes. The second interface is shown in the left window of Figure 1. In the “Change variables” box the values of one or more variables can be changed. The output variables are visible in the “Results” box, in the “resistance diagram”, and in the graph.
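How the individual elements react to frequency changes can be expressed with the standard impedance magnitudes: a resistor is frequency-independent, a coil's impedance grows with frequency, and a capacitor's falls. A sketch, again with hypothetical component values:

```python
import math

def impedance_magnitudes(f_hz: float, r_ohm: float, l_henry: float,
                         c_farad: float) -> tuple[float, float, float]:
    """Impedance magnitude of each filter element at frequency f_hz."""
    z_r = r_ohm                                   # resistor: constant
    z_l = 2.0 * math.pi * f_hz * l_henry          # coil: rises with frequency
    z_c = 1.0 / (2.0 * math.pi * f_hz * c_farad)  # capacitor: falls
    return z_r, z_l, z_c

# Hypothetical values; the simulation's actual ranges are not reported.
for f in (100.0, 1_000.0, 10_000.0):
    z_r, z_l, z_c = impedance_magnitudes(f, 1_000.0, 0.1, 100e-9)
    print(f"{f:>8.0f} Hz:  R = {z_r:.0f} Ohm   "
          f"|Z_L| = {z_l:.0f} Ohm   |Z_C| = {z_c:.0f} Ohm")
```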


The third interface focused on Uout and the current I for the whole frequency range. The fourth interface showed a graphical representation of the transfer function, plotting Uout/Uin as a function of the frequency.
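The curve the fourth interface plots can be tabulated with the standard first-order transfer functions; a sketch (the cutoff frequency is hypothetical, and these closed forms are the textbook expressions rather than anything taken from the SIMQUEST application):

```python
import math

def lowpass_transfer(f_hz: float, f_c: float) -> float:
    """|Uout/Uin| of a first-order low-pass filter with cutoff f_c."""
    return 1.0 / math.sqrt(1.0 + (f_hz / f_c) ** 2)

def highpass_transfer(f_hz: float, f_c: float) -> float:
    """|Uout/Uin| of a first-order high-pass filter with cutoff f_c."""
    r = f_hz / f_c
    return r / math.sqrt(1.0 + r ** 2)

f_c = 1_000.0  # hypothetical cutoff frequency in Hz
for f in (10, 100, 1_000, 10_000, 100_000):
    print(f"{f:>7} Hz:  low-pass {lowpass_transfer(f, f_c):.3f}   "
          f"high-pass {highpass_transfer(f, f_c):.3f}")
```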


The support
Students in the experimental condition were asked to design assignments so that an “imaginary” fellow student could learn from the simulation. For this task we created a support structure that guided the students through three consecutive steps: LOOK (orientation on the simulation), EXPERIMENT (experimentation with the simulation), and DESIGN (designing assignments). The rationale behind these steps, which we called LED, is that we want to focus students’ attention on the relations that are important in the domain (Swaak, van Joolingen, & de Jong, 1998).


After students have acquired knowledge, they are asked to make this knowledge explicit by developing a question, the correct answer, and the explanation of that answer. The support was presented in the simulation environment together with a set of paper-and-pencil worksheets (called LED-sheets hereafter).


The online support in the simulation consisted of assignments, tips, and overviews. This support was available in a window next to the simulation interface (Figure 1). The LED-sheets matched the structure of the online support; for example, if the online support asked students to investigate a certain relation, instructions on the associated sheet supported students in making notes about their investigations.


In the DESIGN phase, the main goal was to design an assignment about the observations made and the knowledge acquired during the previous phases. Students were supported in using this knowledge and making it explicit in their design.


In generating a question, they were instructed to pose a question about the observations they had made. In formulating the answer, they were advised to check the correctness of the answer with the help of the simulation. In generating the explanation for their assignment, they were advised to explain the answer in detail, and to make use of calculations, representations, and observations.


For each interface, except for the fourth one, students went through the three LED phases.

Knowledge test
Knowledge was assessed using a paper-and-pencil (post-)test. The knowledge test consisted of two parts: one set of items intended to measure conceptual (insight) knowledge, and a second set of items focused on measuring procedural (calculation) knowledge. All items were scored by a rater who was blind to the condition of the participant who had taken the test. Both the test and the answer key were developed together with the teacher. Conceptual knowledge (insight into the cause-effect relations in the domain) was measured by items in which students were asked to predict or explain the effect of a change.


Students received points for correct answers and for their reasoning. In the example shown in Figure 2, the student not only had to choose a situation, but also had to give a reason for their choice. There were a total of 28 conceptual items, with a maximum total score of 50 points; the maximum point value per item depended on its complexity (13 items with a maximum of 1 point, 9 with a maximum of 2 points, 5 with a maximum of 3 points, and 1 with a maximum of 4 points). Reliability analysis of the test resulted in a Cronbach's alpha of 0.80. Two judges independently scored the answers to the conceptual knowledge items for ten percent of the data, with inter-rater agreement reaching 0.70 (Cohen’s kappa).
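For illustration, Cronbach's alpha can be computed from a students-by-items score matrix; a minimal sketch on toy data (not the study's scores):

```python
import statistics

def cronbach_alpha(scores: list[list[float]]) -> float:
    """Cronbach's alpha; rows are students, columns are test items."""
    k = len(scores[0])                            # number of items
    item_vars = [statistics.variance(col) for col in zip(*scores)]
    totals = [sum(row) for row in scores]         # total score per student
    return (k / (k - 1)) * (1.0 - sum(item_vars) / statistics.variance(totals))

# Toy data: four students, three items (values are illustrative only).
scores = [[1, 2, 1], [0, 1, 0], [1, 3, 2], [0, 2, 1]]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```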


Procedural knowledge was measured by test items in which students were asked to perform calculations. Students received points for the calculation procedure and the correct answer. There were a total of 6 procedural items with a maximum total score of 15 points; the maximum point value per item depended on its complexity (1 item with a maximum of 1 point, 3 with a maximum of 2 points, and 2 with a maximum of 4 points). An example of a procedural item is presented in Figure 3. Reliability analysis of the test resulted in a Cronbach's alpha of 0.64. Two judges independently scored the answers to the procedural knowledge items for ten percent of the data, with inter-rater agreement reaching 0.76 (Cohen’s kappa). There were a total of nine introductory items that were used to “warm up” the students. These items referred to general domain knowledge and were not analyzed.
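Cohen's kappa corrects the raw proportion of agreement between two judges for the agreement expected by chance; a sketch of the computation on toy ratings (the study's item-level data are not reproduced here):

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's kappa for two raters scoring the same items."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(rater_a) | set(rater_b))
    return (observed - expected) / (1.0 - expected)

# Toy ratings by two judges (categories are illustrative only).
a = ["ok", "ok", "wrong", "ok", "partial", "ok", "wrong", "ok"]
b = ["ok", "ok", "wrong", "partial", "partial", "ok", "ok", "ok"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```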


Conclusion
In three two-hour sessions, students in the experimental condition went through the simulations of each of the three filters. At the beginning of the first lesson, the experimenter introduced the students to the SIMQUEST learning environment. For the design task, the experimenter explained the three phases in the design approach and told the students how to use the LED-Sheets. 

During the first lesson, students worked with the simulation of the first filter. At the end of the lesson, all LED-sheets were collected. At the beginning of the second and third lessons, the LED-sheets were returned to the students and students continued where they had left off in the previous lesson. Near the end
