As technology advances, people want to use it for more and more tasks in an attempt to make their lives easier. But when it comes to education, is technology more harmful than helpful? Most people would likely think that distance education began with the advent of computers, but that is far from true. In the 1800s, in England and America, a system was developed called correspondence education. Correspondence education used the postal system to enable people who could not attend universities in the conventional manner to still seek an education.
Due to the popularity of the correspondence system, the National Home Study Council was formed to ensure the quality of students' education and to deal with other problems of the system ("What is distance," 2005-2011). As technology progressed, new forms of education were created: at various points radios, televisions, and even telephones were used. Now the primary equipment used in distance education is the one thing many people worldwide feel they cannot live without: computers ("What is distance," 2005-2011). This paper is a review of several articles that relate to online education.
Several of these articles compare online education to what is often called "face-to-face" education, which in this paper will be called conventional education. The paper presents information from several different perspectives, such as a sociological or a business perspective. Its main purpose is to examine whether online education is less effective from an educational standpoint. For the most part, the articles are less scientific than would be preferred; it seems that most of the more scientific fields have yet to do much research on this topic.
The first article is a study done at North Carolina State University by several sociology professors. The article refers to the study as "a quasi-experimental design to assess differences in student performance and satisfaction across online and face-to-face (FTF) classroom settings" (Driscoll, Jicha, Hunt, Tichavsky & Thompson, 2012). This article has three hypotheses. The first is that there would not be a significant difference between the groups in regard to exam performance. The second is that there would not be a significant difference between the two groups in regard to performance on the data analysis assignment.
The third hypothesis is that there would be no significant difference in the reported satisfaction of the two groups' students. They tested their hypotheses by comparing "student satisfaction and student performance on midterm exams and an integrating data analysis assignment" (Driscoll, Jicha, Hunt, Tichavsky & Thompson, 2012). In practice, this means they looked at the students' grades and had the students complete an online survey. The data was collected from two groups: one consisting of three online classes and the other of three conventional classes.
Between these two groups, 368 students actually replied to the survey, with more conventional students replying than online students (Driscoll, Jicha, Hunt, Tichavsky & Thompson, 2012). The article has four tables that explain the results of the experiment. The first table shows the population demographics of the two groups. The main difference between the two populations was the types of students in them: the online students tended to be older than the conventional students, to have lower GPAs, to be enrolled in fewer credit hours, and to work more hours each week.
The second table shows a significant difference between the exam performances of the two groups, in that the online group performed much worse than the conventional group. However, the article goes on to explain that when GPAs are factored in, it becomes clear that the reason for this is that stronger students with higher GPAs are more likely to attend conventional classes, while weaker students with lower GPAs are more likely to choose the online class. This apparently supports the first hypothesis.
The third table shows mostly the same results as table two, in that the only significant differences can be attributed to the differences in the GPAs of the two groups. This supports the second hypothesis. The fourth table shows students' satisfaction with the course they took, and there was, once again, no significant difference, which supports the third hypothesis (Driscoll, Jicha, Hunt, Tichavsky & Thompson, 2012). One weakness of this study is that the groups were not truly random, as students are of course allowed to choose which classes they take.
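The kind of group comparison described above can be illustrated with a minimal sketch. The exam scores below are invented for illustration only, not data from the study; the sketch assumes two lists of midterm scores and computes a pooled-variance two-sample t statistic, the sort of test used to decide whether a raw difference like the one in the second table is significant.

```python
from math import sqrt
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic for the difference in means."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    se = sqrt(pooled * (1 / na + 1 / nb))
    return (mean(a) - mean(b)) / se

# Hypothetical midterm scores -- NOT the study's actual data.
conventional = [80, 78, 85, 76, 82]
online = [70, 68, 75, 66, 72]

t = two_sample_t(conventional, online)  # a large |t| suggests a real gap
```

A raw gap like this one can shrink or vanish once GPA is added as a covariate, which is exactly the pattern the study reports: the difference belongs to which students choose each format, not to the delivery mode itself.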
This had an effect on the data: as discussed previously, the online classes tended to do worse on exams because the students who prefer online classes tend to have lower GPAs than those who prefer conventional classes. There is also the fact that the researchers used the students' self-reported GPAs instead of their academic records. And the study only examined multiple sections of one course taught by one teacher.
This helped control the quality of the instruction provided to the students, but it means that the study does not truly generalize. The study also found that students who thought that interacting with the instructor was important to their success in a course did worse than students who did not. The authors of the article believe this is because weaker students tend to rely more upon the instructor (Driscoll, Jicha, Hunt, Tichavsky & Thompson, 2012).
As a counterpoint, the next article is about a similar study in which the students were randomly assigned. Rather than examining an entire course, this study looks at a single set of lessons designed to be as similar as possible, one delivered online and the other in a conventional class. The point of this is to reduce outside interference as much as possible. There were 59 subjects, and they were given two questionnaires, one before the lesson and one after. The lesson was a grammar lesson about apostrophe usage and included a 25-question test.
The hypothesis appears to be that students in online education courses do not learn as much as conventional students (Emerson & McKay, 2011). The results showed that while both groups were confident in their comprehension of the material, the conventional students performed roughly 24% better on the test than the online students (Emerson & McKay, 2011). The authors suggested that there may be a problem with the study, as it did not fully utilize the interactive capabilities of online education.
They also stated that the study is difficult to generalize because the subject of the lesson does not compare to the difficulty of a multi-disciplinary subject, and that more research should be done on modes of learning and on whether the subject matter changes the efficacy of online education (Emerson & McKay, 2011). As the previous articles demonstrate, in today's educational culture the student is the key factor not only in determining the student's success, but also in how students evaluate a course.
This is further highlighted by the next article, a study that examined student course evaluations in order to discover what factors, if any, may affect a student's evaluation of their course and instructor. The researchers used a survey program to poll students from 29 institutions across 11 different states; in total they surveyed 11,351 students. The survey consisted of 40 questions composed to test eight different areas related to teaching, including, but not limited to, communication, assignments and exams, and course difficulty/workload.
This experiment in particular was intended for maximum generalizability, so the students and teachers were of varying ages and genders and came from different courses across different disciplines (Liu, 2012). The study concluded that in most areas first-year students gave lower ratings than other students, and that female students tended to provide higher ratings than male students in the assignments and grading portion. It also showed that, for the most part, the instructor's gender and academic rank have little if any effect on online students' evaluation of their satisfaction.
The most significant findings regarding students' evaluations of their courses were that required courses were rated more difficult than electives and also received lower ratings on communication and course materials (Liu, 2012). Essentially, many of the bias factors that affect a course or teacher evaluation in a conventional class are not applicable in online education. One problem with this study is that there are many people who do not like online education, and the study therefore does not apply to them.
It was also noted that ongoing student surveys are still needed for efficiency. In short, this experiment does not really change anything; it just shows that a bad course or teacher evaluation is probably not attributable to certain bias factors. According to the author, there should be more research on ways to make online educators more effective (Liu, 2012). For a moment, let us drift away from college online education and toward high school online education. There are different requirements for a high school diploma than there are for a college degree.
An example of this would be physical education requirements. Because many states have physical education requirements, there are, and this sounds like an oxymoron, online physical education teachers. The next article is about a survey of online physical education teachers (Adam, 2012). The survey was conducted with a very small sample: of the 45 surveys, only 36 were completed, and four had to be discarded, which brings the sample to 32 completed surveys. The study was conducted in the United States and had participants in 14 states; many participants were women.
Most had taught conventional physical education for at least seven years but had been teaching online physical education for two years or less. The survey itself was carefully written and reviewed to ensure content validity. It was also electronic rather than a traditional mailed survey, because the participants were online education teachers and should presumably know how to use a computer (Adam, 2012). The results showed that all of the participants had attended college and earned either a bachelor's or a master's degree, although the natures of the degrees varied.
It was also found that 25% of the participants had received no training in teaching an online course, and that those who had received training were trained by someone who knew how to work the system but did not hold a degree in computer science or a related field. It was also found that few of the participants' programs met the national requirement and that several had difficulty getting their students to complete the work on time. There were additional difficulties in motivating the students and making sure they actually did the activity (Adam, 2012).
The weaknesses of this survey were some confusion about two of the questions and, as noted, a very limited sample size. More research needs to be done to find ways to improve online physical education classes and to ensure that students actually complete their work (Adam, 2012). And now back to college education. The next article has two hypotheses: first, that student involvement predicts the outcome of their online courses, and second, that there is a relationship between a student's involvement in their online courses and how connected they feel to the institution they are attending (Shin & Chan, 2004).
The participants in this study were 285 distance education students of the Open University of Hong Kong enrolled in one of four courses at the School of Business and Administration. They were given a survey that used a Likert scale and asked questions about student satisfaction, instructor availability, how often the student logged onto the course site, and how likely the student was to continue their education (Shin & Chan, 2004). The researchers found that their second hypothesis was not supported for students who used the course website but were not required to.
However, they did find significant relationships between the students' connectedness to the university and the other areas of the study, in particular student satisfaction. The data for students required to use the course website were different, but similar (Shin & Chan, 2004). According to the authors, it is difficult to determine which group's data is more reliable. It is also debatable whether log-in frequency has an effect on students' education, because the study measured only how often a student logged on, not for how long.
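To make the kind of relationship this study tested concrete, here is a minimal sketch with invented ratings, not Shin and Chan's actual data: a Pearson correlation between hypothetical connectedness scores and satisfaction scores from a five-point Likert scale. A coefficient near 1 would indicate the strong positive relationship the study reports.

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical five-point Likert ratings -- NOT the study's actual data.
connectedness = [1, 2, 3, 4, 5]
satisfaction = [2, 3, 5, 4, 5]  # roughly rises with connectedness

r = pearson(connectedness, satisfaction)  # close to +1: strong positive link
```

Note that a correlation like this says nothing about direction of cause, which is one reason the log-in-frequency measure discussed above is hard to interpret.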
There was also a difference in the students' educational backgrounds in relation to technology. The authors believe that feeling connected to the university is important to a student's education and that more research should be done to find new ways to foster that feeling (Shin & Chan, 2004). And now, for something completely different, here is a review of an article about "free" online classes and "the economic implications as educational institutions expand online learning initiatives" (Cusumano, 2013). This article is not about an experiment; rather, it examines massive open online courses and how they may affect the education community.
The author is a professor at MIT's Sloan School of Management. In the article he explains why these "free" online classes are potentially dangerous, giving examples of instances when free online products actually harmed their more traditional counterparts. One such example is encyclopedias: because of Wikipedia, fewer people are buying encyclopedias, and this has caused entire businesses to close. Yet in recent years, while more people have been using Wikipedia, contributions to Wikipedia have been declining (Cusumano, 2013).
The author also states that his research found that "about two-thirds of the public software product companies existing in 1998 disappeared by 2006" (Cusumano, 2013). In the conclusion of the article, he explains that, in essence, his article is about how free classes can impact our educational system both positively and negatively: they can force a decrease in the cost of tuition, but they could also bring about the closure of smaller, "less-prominent educational institutions" (Cusumano, 2013).

Conclusion

Online education and conventional education are very different from each other.
Their teaching styles are different, their students are different, and their socioeconomic impacts are different. But is one better than the other? So far there are very few studies that can say anything for certain, and many of those that do contradict each other. For now, the only thing we can say is that it does not seem to be the education provided that is at fault for the grades of online students, but rather the students themselves. We need more research into ways to improve online education and to improve the students of online education.