Using Google Forms to collect survey responses from students allowed me to take advantage of its built-in analysis-aiding features. The tool allows the survey creator to view responses both in summary form (all responses on the same page, organized by question) and as individual responses (all questions on a single page, with a separate page for each response received). In summary view, for multiple-choice and selected-response questions, including demographic information, Forms compiled all student answers into easy-to-read, color-coded graphics such as pie charts and bar graphs. For short answers, the summary view allowed me to quickly read through all student responses to a question. This view allowed me to gain a general idea of student feedback and to compile a list of potential categories for organizing the responses to each question before more carefully analyzing and categorizing the written data.
In order to make the most of both the summary and individual forms of data reporting in Google Forms, I took advantage of the ability to convert the survey results into a Google Sheet. The spreadsheet format allowed me to view information from all of the student responses at once while also seeing which responses came from which (anonymously numbered) students. This allowed me not only to categorize responses more easily, but also to look for telling patterns among student responses (for example, were students who found that they learned better using document-based learning also better able to identify and explain key historical thinking skills?).
Setting aside the automatic quantitative analysis performed by Google Forms, I began my analysis with a qualitative study of students’ answers to the survey’s short-answer questions. To organize this analysis, I first used the summary view of Google Forms to look through all student responses to a particular question and extract common themes that could serve as categories for that question. Next, using the Sheets view of the data in order to keep a sense of how each answer fit into a student’s overall survey responses, I sorted the responses for each question into the categories I had determined during my first read-through. In some cases, upon this second read-through, I renamed, added, removed, or combined categories where doing so seemed to better fit the data.
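The two-pass coding described above can be pictured in a few lines of Python. The responses, category names, and coding decisions below are purely hypothetical placeholders, since the actual coding was done by hand in the spreadsheet; the sketch only shows the shape of the workflow:

```python
from collections import defaultdict

# Hypothetical short-answer responses, keyed by the anonymous student
# number carried over from the Google Sheet (not the real survey data).
responses = {
    1: "We analyzed primary sources about the French Revolution.",
    2: "I don't remember using documents.",
    3: "We compared two letters and wrote about their authors' perspectives.",
}

# Second-pass coding decisions: each response is manually assigned to one
# of the candidate categories extracted during the first read-through
# (category names here are invented for illustration).
assigned = {
    1: "primary source analysis",
    2: "no clear example",
    3: "primary source analysis",
}

# Group student numbers by category, preserving the link back to each
# student's full row of survey answers in the Sheet.
by_category = defaultdict(list)
for student, category in assigned.items():
    by_category[category].append(student)

print(dict(by_category))
```

Keeping the student numbers attached to each category, rather than just counting responses, is what preserves the ability to relate one question's answer to the rest of that student's survey.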
As I sorted responses for each question into categories, I also noted how the student for each response had answered the first question on the survey:
“Document Based Learning is when you answer questions and reach conclusions based on your analysis of historical documents. Questions and conclusions can be based on a single document, or require you to use multiple documents. Briefly (one or two sentences) describe one time when you remember using Document Based Learning this semester in history class or for a history class assignment.”
Specifically, I noted whether the student had positively identified an instance of DBL (as I defined it for this study), had negatively identified an instance of DBL, or could only possibly be said to have identified an instance of DBL (the student either gave an unclear answer or did not answer the question).
My goal in doing this was not necessarily to judge whether or not students were capable of identifying document-based learning. First of all, even students who gave one incorrect example in this survey may have also been able to give a correct answer. Furthermore, as I realized after giving the survey, my description of document-based learning was likely not specific enough to help students eliminate content-based textbook reading questions and report-style projects as examples. (These two types of assignments in their class involved documents, but were more tailored toward finding and reporting content knowledge than toward practicing skills like sourcing and contextualization.) All in all, I did not want to assume that a student’s answer to this question definitively showed whether or not he or she knew what document-based learning was.
Instead, my main purpose in connecting the first question in the survey to the rest of the questions was to determine, whenever possible, whether a student offering certain insights and opinions on document-based learning was envisioning the same type of lesson as I was. This enabled me to look for patterns in overall student responses in relation to their general understanding of what DBL was. For example, with the prompt, “I feel like Document Based Learning makes me understand class material ____________ than taking notes directly from a textbook or listening to a lecture,” a student could have answered “much better” regardless of whether he or she had positively identified DBL. If students who had positively identified a document-based lesson found DBL more beneficial than students who had identified a different type of lesson, it would strengthen the case for a positive student perspective on DBL as I studied it. If there were no difference between the two, I would have to consider why students found other types of lessons to be just as effective as document-based lessons. In this way, all student answers could have value whether or not they had positively identified an instance of document-based learning.
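One way to picture this comparison is as a simple cross-tabulation of identification status against the rating a student gave. The following Python sketch uses invented pairs, not the actual survey results, and the status labels are shorthand for the three identification outcomes described above:

```python
from collections import Counter

# Hypothetical paired observations: (identification status on question 1,
# answer to the "understand class material ____" prompt). Invented data.
rows = [
    ("positive", "much better"),
    ("positive", "much better"),
    ("positive", "somewhat better"),
    ("negative", "much better"),
    ("possible", "about the same"),
]

# Cross-tabulate the pairs: if "positive" identifiers cluster at "much
# better" while others do not, that pattern would support a favorable
# student view of DBL as defined in the study.
crosstab = Counter(rows)
print(crosstab[("positive", "much better")])
```

With real data, comparing the distribution of ratings within each identification group is what distinguishes "students like DBL as I defined it" from "students like whatever lesson they happened to recall."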
From this qualitative analysis, I was also able to develop quantitative data in order to gain a more concrete picture of the distribution and trends of student responses. I determined the percentages of responses which fell into certain qualitative categories, and combined this quantitative analysis with other quantitative data organized by Google Forms.
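As a rough illustration of that step, with made-up category labels and counts rather than the study's data, the percentage calculation amounts to:

```python
from collections import Counter

# Invented category assignments for one short-answer question.
assigned = ["engagement", "engagement", "skills", "no answer", "engagement"]

counts = Counter(assigned)
total = len(assigned)

# Percentage of responses falling into each qualitative category.
percentages = {cat: round(100 * n / total, 1) for cat, n in counts.items()}
print(percentages)
```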
Comparison to Teacher Responses and Classroom Observations
Since this study was not conducted with a control group, and I did not have personal control over the variables of classroom activities, I was generally unable to make broader comparisons concerning student responses to DBL versus other approaches to teaching and learning history. I was also unable to compare the teaching practices used in the classroom I studied with those of other non-AP history classrooms, whether in the same school, in a different district in the same area, or in a contrasting setting across the country.
However, in order to establish as clear a picture as possible of student experiences with document-based learning in this classroom, I was able to compare the student survey responses to two other points of data collection: the teacher survey, and my own observations of the classroom. The teacher responses allowed me to compare the instructor’s intentions and perceptions of document-based learning to those of his students in order to see to what extent student responses reflected the teacher perspective. My classroom observations allowed me to see document-based learning in action, and to note such factors as student engagement, student performance of historical skills, and the general practicality and feel of both document-based and non-document-based lessons.