Abstract
In response to calls for engineering programs to better prepare students for future careers, many institutions offer courses with a design component to first-year engineering students. This work proposes that traditional exam-based assessments of design concepts are inadequate, and alternative forms of assessment are needed to assess student learning in design courses. This paper investigates the self-efficacy differences between a traditional exam and a two-part practicum as a mid-semester assessment for introductory engineering students enrolled in a first-year design course. Increased self-efficacy has been linked to various positive student outcomes and increased retention of underrepresented students. The practicum consisted of an in-class team design task and an out-of-class individual reflection, while the exam was a traditional, individual written exam. All students completed a pre-assessment survey and a post-assessment survey, both of which included measures of design self-efficacy. Analysis showed that the practicum increased the design self-efficacy of students more effectively than the exam. Students who identified as women had greater gains in design self-efficacy during the practicum as compared with men. Identifying as a minority subgroup student was also trending toward being a significant predictor of change in design self-efficacy for the practicum. Findings suggest that a mid-semester practicum is a successful assessment of design competencies that contributes to increased first-year engineering student self-efficacy.
Introduction
Many first-year engineering design courses are taught using a project-based learning (PBL) approach, the theory behind which posits that students learn more through experiential, hands-on, and open-ended challenges than through rote coursework [1]. PBL-structured first-year engineering design courses have traditionally been difficult to assess due to the subjective nature of the material [2], even though they offer many benefits to students, instructors, and institutions. For instance, Abdulaal et al. [3] contend that PBL is one of the best ways to teach design because it is correlated with high student achievement and satisfaction. Furthermore, first-year students self-report many positive benefits after completing a team-based PBL design course [4–6]. Both design and problem-solving based projects have been taught successfully using PBL in a variety of class structures, including case studies, single-project, and multi-project structures [7]. PBL can also be easily adjusted to meet the requirements of the university, program, instructor, and individual students on a per-semester basis [8–10].
While PBL and other experiential pedagogical strategies for engineering are widely accepted in the design and engineering communities, implementation of these courses is not without challenges [7,9]. Disadvantages include costs in terms of resources, instructor preparation time, and the need for smaller classes [7,9]. PBL is most successfully taught with smaller class sizes, which requires more instructors [7,9]. It has also been suggested that PBL design courses require a large number of faculty who are interested in and capable of teaching design, a resource which may not exist within all engineering departments [7]. While the costs might be offset by the retention of more engineering students, semester-long PBL courses may not be feasible for all institutions [7]. At institutions where a semester-long PBL course is impractical, a practicum could be implemented instead. Design practica are typically structured as open-ended design problems with a hands-on component and are limited in duration, lasting from an hour to a few days. Because practica share many of the same core principles as PBL while being shorter in duration, they can offer the benefits of a semester-long PBL course at a lower cost.
The rapid increase in PBL adoption was largely driven by the perceived misalignment between engineering educational practices and the needs of 21st century engineering [11–13]. For example, Tryggvason and Apelian [11] argue that engineering education needs modification and revision because technology has changed the engineering environment. As industry and engineering practice evolve within a shifting technological landscape, it is paramount that the education and training of engineering students keep pace [12,14]. For example, technology has facilitated a drastic increase in the globalization of businesses and engineering graduates must be able to navigate that environment [11]. Engineering employers want graduates to have technical breadth, good theoretical understanding, and practical application skills [13]. In addition to technical skills, it has been suggested that the modern engineer requires innovation [12,13] and creativity skills [12,13], along with strong social [12,13] and interpersonal skills [13,14]. Engineering graduates must be able to communicate effectively [12,13], successfully work in teams [12,13], have business skills beyond commercial sensitivity [13], and be able to manage ambiguity and uncertainty [14,15]. As part of this movement for engineering education revision, some colleges now offer hands-on first-year engineering design courses to help satisfy engineering industry expectations and meet the needs of students.
This work specifically focuses on the assessment of first-year engineering design courses, which serve a broad variety of purposes. Primarily, the intent of these courses is to provide students with a realistic perspective on engineering practice via hands-on project-based courses early in their academic careers [3,8,16–19]. However, these courses are also often used to raise enthusiasm for engineering [8,10,20–22], create motivation for students to persist in their major [21], and increase students’ confidence [3,16,17,22]. First-year design courses are not only well intentioned but also yield many benefits for students. Sheppard and Jenison [16] noted that almost all first-year engineering design courses examined in their paper helped to improve the communication skills and teamwork proficiencies of students. Similarly, other researchers found that verbal, visual, and written communication had been improved by first-year design courses [3,7,20]. Students also improved fundamental engineering skills, including mathematical modeling [3,16], creativity [23,24], problem-solving [9,10,20], critical thinking [10,22,25], and prototyping [16,18]. Moreover, students developed an awareness of local and global challenges and a fervor for lifelong learning [3]. Furthermore, particular skills can be cultivated when design courses have specific foci. For example, students became more reflective [26,27], had an increased awareness of the social impact of design [7], and had a perceived increase in their ability to solve multidisciplinary problems when special emphasis was put on these topics [28].
Not only do these courses benefit students but universities and programs also gain from offering them. Researchers determined that problem-based first-year engineering courses improved student retention in engineering, with significant gains in the retention of women and minority status students, along with enhanced student satisfaction, diversity, and learning [3,7,22,29]. These results are valuable and pertinent because other research has shown that engineering students’ motivational constructs decrease throughout the first year and that first-year women engineers have more negative attitudes toward engineering when compared with men [30,31]. Given the benefits of PBL-based first-year engineering design courses, it is important to improve assessment techniques in these courses so that instructors can adequately and accurately evaluate student learning, while also building a better understanding of the impact of such assessments. Improved assessment methods will allow instructors to more easily evaluate students’ progress and improve their PBL-based first-year engineering design courses.
Literature Review
The study presented in this work will examine two types of assessments used in one PBL structured first-year engineering design course. Different types of assessments were explored due to the difficulty of assessing subjective skills, like engineering design, using traditional exam-based methods [2]. For the remainder of this paper, the term PBL will refer to the style of PBL used for semester-long engineering courses, while practica will refer to an assessment structured to reflect aspects of PBL.
Due to the aforementioned benefits of PBL, we hypothesized that a PBL-based practicum assessment would better align with the philosophy of the course and increase student self-efficacy when compared with a traditional written exam. This hypothesis is supported by the results of Stolk et al. [32], who conclude that students had more positive intrinsic motivation when participating in non-traditional (e.g., project-based) or mixed pedagogy STEM courses, which is partially attributed to their basic self-efficacy need being satisfied. This paper will review literature on the assessment of design courses and student self-efficacy related to design before detailing the methods, results, and implications of this particular study.
Assessment of Design.
This study was conducted at a large mid-Atlantic university within a large cornerstone engineering design course for first-year students [33]. The course had conventionally been assessed using exams and, as such, the alternative form of assessment being explored, the practicum, was compared with the exam. The use of written exams has traditionally been prevalent in engineering education as a method for evaluating the engineering competencies of students [34]. Rompelman [34] stated that written examinations are a part of the engineering educational culture. Exams became popular because, when designed well, they easily assess whether students have achieved predetermined educational goals [34]. Moreover, exams are considered “objective,” controlled, quantitative, and standardized [2,34]. Though some argue that engineering design can be assessed using exams [35], exams have disadvantages, which tend to be magnified within engineering project-based courses [2]; traditional written exams are often inadequate for evaluating subjective skills, like design, and are poor predictors of students’ design performance [2]. Because design lacks conclusive experiments and design tasks rarely have strictly right or wrong answers, design work is normally graded subjectively [2]. Consequently, educators are striving to develop assessment techniques for engineering design that evaluate design skills more effectively while retaining the advantages that exams offer [2,36,37].
Some of these alternative assessment techniques include design juries, peer evaluations, and journals [10]. While instructors may use papers or presentations to assess students [38], some engineering programs have opted for a practicum structure when assessing first-year engineering design courses. Commonly, practicum assessments are designed as in-class assessments (with a duration of a few hours), but they can also be developed as take-home exams (i.e., tens of hours to days). Practicum assessments align well with a PBL teaching style, and they share more aspects with an exam than papers or presentations do (e.g., they are administered under controlled conditions). In a practicum, students demonstrate their mastery of material by solving an open-ended and realistic design problem rather than completing a paper-and-pencil test [39]. For instance, students may be tasked with designing a solution for overcrowded animal shelters or finding a way to increase access to medical care in rural areas. These assessments provide an engaging experience for students and afford students a glimpse of what real-world engineering work may be like, which is unachievable with traditional exams. Practicum-style assessments align particularly well with courses that utilize a PBL format since they rely on many of the same core principles as a semester-long PBL course (e.g., emphasis on hands-on and open-ended problems). When used with traditional lecture-based courses, adding a practicum assessment incorporates a beneficial hands-on component, and when used within a design course, the use of practica better aligns the method of assessment with learning theory [39].
While the benefit of first-year engineering design programs is well established [7,40,41], the extent to which the assessment style used in first-year design classes affects students’ psychological outcomes—particularly self-efficacy—has yet to be explored. Without appropriate assessment techniques, it is difficult for educators to evaluate the strength of the program or curriculum.
Self-Efficacy Theory Applied to First-Year Design.
Self-efficacy is defined as the confidence a person has in their ability to complete a prospective skill-specific task [42] and is shaped by mastery experiences, vicarious experiences, social persuasion, and physiological state [42–44]. The construct of self-efficacy has been found to influence students’ motivation [45,46], outcome expectancy [46], anxiety [46], persistence [47], grades/GPA [45,48], mathematical achievement scores [48], and perceived career options [47]. For first-year students, academic self-efficacy correlates with students’ classroom performance, stress, health, satisfaction, and commitment to continue their education [49]. In engineering, self-efficacy theory has been applied to predict retention and persistence of various groups of students [50]; for example, self-efficacy is a predictor of women's persistence in a field traditionally dominated by men, like engineering [51,52].
Competency-specific self-efficacy measures have also been created in engineering, such as writing self-efficacy [53,54] and design self-efficacy [46]. The measure for writing self-efficacy has been used to better understand how engineering graduate students conceptualize and relate to the writing process [54]. Furthermore, this measure has been used to show that the likelihood that a graduate student will pursue a certain engineering career after graduation and writing self-efficacy are linked [53]. Self-efficacy in design tasks can best be evaluated using a scale developed and validated by Carberry et al. [46], which measures a respondent's design self-efficacy, motivation, outcome expectancy, and anxiety throughout the design process. The design self-efficacy measure developed by Carberry et al. [46] has mainly been used in engineering education to study students’ design self-efficacy. For example, one recent study found that students who were more involved in academic makerspaces demonstrated greater engineering design self-efficacy [55].
Bandura [43,44] notes that mastery experiences are critical to the development of self-efficacy beliefs [56], and prior work has linked mastery experiences with increased self-efficacy in first-year engineering students. Mastery experiences have previously been found to have the greatest influence on self-efficacy, but contextual factors (e.g., gender and ethnicity) can mitigate the strength of the effect [57,58]. As such, this study explores whether an authentic practicum assessment (serving as a mastery experience) affects students’ design self-efficacy. While it cannot be guaranteed that students will perceive the practicum assessment as a mastery experience, the practicum is intentionally designed to be conducive to a mastery experience and is thus more likely to be perceived as such by students. Here, we use the term authentic to denote an educational experience which emulates attributes of an experience common in industrial practice [59]. We posit that authentic design practica serve as more effective mastery experiences for first-year students than traditional written exams. Measuring change in self-efficacy as a result of assessment gives unique insight into the development of students’ skills and can point toward best practices in assessments of PBL courses, such as first-year design.
Research Questions
As first-year design courses become more prevalent, it becomes increasingly important for educators to identify a formal assessment method that is both capable of evaluating subjective design skills and serves as an exciting, empowering introduction to engineering for students from all backgrounds. Specifically, this paper addresses two research questions:
Is there a difference in the self-efficacy change experienced by students who take exams and those who take practica?
If changes in student self-efficacy are observed, are they influenced or dependent on the demographic traits of the students?
To address these questions, this paper compares self-efficacy differences between first-year engineering students who were assessed via a traditional exam and those assessed via an authentic design practicum developed at a large mid-Atlantic university. Both assessments were used for summative mid-semester assessment. The practicum is compared with the exam because the exam was the previous standard of assessment for the course in which this study took place. A practicum structured around the same principles as PBL is likely to produce similar positive psychological effects, unlike a non-project-based assessment (e.g., an exam). We will discuss the impact of practica-based assessments on first-year engineering students’ design self-efficacy in comparison with traditional paper exams.
Methods
An exam and practicum were developed for introductory engineering design courses as summative mid-semester assessments at a large mid-Atlantic university. Both assessments were designed to assess core design competencies covered in the course. A total of 46 students completed the exam and 50 students participated in the practicum for this institutional review board (IRB) approved study. All students in the course completed the associated data collection instruments, but only the data for the students who consented to the study are presented here. Students were informed before consenting to the study that their participation (or choice not to participate) would not impact their grade. Demographic information for the participants is provided in Table 1. The demographic make-up of the two groups (i.e., the exam and the practicum) is not balanced because separate course sections received the two assessments.
Table 1 Demographic information for participants

| Age (years) | Practicum | Exam |
| --- | --- | --- |
| 17 | 0 | 4 |
| 18 | 24 | 30 |
| 19 | 23 | 7 |
| 20 | 2 | 3 |
| 21 | 1 | 0 |
| 25 | 0 | 1 |
| 32 | 0 | 1 |

| Gender | Practicum | Exam |
| --- | --- | --- |
| Male | 28 | 38 |
| Female | 22 | 8 |

| Race | Practicum | Exam |
| --- | --- | --- |
| White | 34 | 28 |
| Middle Eastern or Native African | 5 | 1 |
| Black or African American | 1 | 1 |
| Asian | 6 | 6 |
| Hispanic, Latino, or Spanish Origin | 0 | 2 |
| Multiple selections | 4 | 8 |
Both the exam and the practicum were intended to serve as summative mid-semester assessments. Specifically, these evaluations were intended to assess students’ knowledge of, and ability to apply, the design process and tools that they were taught earlier in the course. Learning objectives of this course included the ability to apply engineering design to address design opportunities; the development, use, and application of systems thinking to engineering design; the development of professional engineering skills; the ability to effectively communicate engineering concepts and designs; and providing the opportunity for students to gain experience in hands-on fabrication while developing a “maker” mindset. The exam was constructed to be 2 hours in length and consisted of multiple choice and short answer questions addressing various portions of the engineering design process. The questions in the exam readily assessed whether students were able to apply engineering design to various design opportunities or had developed a systems thinking mindset; however, the exam was less useful in determining whether students could effectively communicate engineering concepts and did not provide hands-on experience. The exam was completed individually in class. It was used in comparison with the practicum because it had been the previous assessment standard for the course in which this study took place.
In contrast, the practicum was designed to engage students in an authentic design task. The practicum involved an in-class team design activity approximately 2 hours in length and an out-of-class individual reflection (students who participated in the exam did not complete this reflection) completed within the 48 hours following the practicum. Students worked in teams to facilitate collaborative design. Teamwork is considered a critical skill for engineers, and its use reinforced for students that it is uncommon for designers to work alone in industry [16,59–63]. The structure of the practicum allowed the instructors to more comprehensively assess whether students were meeting all of the learning objectives of the course and also offered an additional opportunity for hands-on fabrication.
Students who were assessed via the practicum were directed to design a product or service for the residents of Cape Town, South Africa, that would aid the residents in their water conservation efforts by using the design processes, methods, and tools previously taught in class. The prompt given to the students who participated in the practicum is as follows:
“Cape Town, a coastal city in South Africa, is running out of water. This is the result of a combination of factors, including thriving agriculture, a large population, and a regional drought. They estimate that by July 2018 there will be no more fresh water in their reservoirs. The government of South Africa has declared this a national disaster. It is obvious that Cape Town is in need of long-term solutions that will allow the large population to live sustainably. However, your task is much more short-term in nature. Design a product or service that will aid in water conservation efforts, helping the residents of Cape Town to extend their available water as long as possible”
By having students design a product with social impact for a real-life crisis, this prompt incorporated the use of global perspectives and provided students with an empathetic engineering experience [64–66]. Students were given 48 hours after completing the in-class activity (i.e., the 2 hour team practicum) to submit an individual reflection, in which they considered the design process used by their team, discussed any gaps in this process, and analyzed decisions made during the design process. This style of reflection encouraged desirable learning outcomes by prompting metacognition [67]. The reflection also ensured that each student completed a deliverable as part of the assessment and aided the instructor in individually assessing each student's learning.
The experimental procedure is summarized in Fig. 1. Students completed the pre-assessment survey in class before completing the exam or participating in the practicum. The pre-assessment survey consisted of queries regarding age, gender, race/ethnicity, and degree of preparation for the assessment. The engineering design self-efficacy instrument developed by Carberry et al. [46] was also included in the pre-assessment. This instrument asks participants to rate their confidence in their ability to complete each of nine common tasks associated with engineering design, such as “construct a prototype,” on a scale from 0 to 100 [46].
The post-assessment survey was taken by the exam students within 48 hours of completing the exam and the practicum students took it after completing their individual reflections (∼48 hours after completing the in-class task). This post-assessment survey included (1) the original Carberry et al. [46] engineering design self-efficacy instrument, and (2) a modified version of the engineering design self-efficacy instrument which requested that students rate the change in their confidence in their ability to complete each of nine engineering design tasks. The modified instrument used a scale from −50 (indicating a decrease) to 50 (indicating an increase). This enabled the evaluation of students’ perceived change in self-efficacy. For this study, computed self-efficacy will be used to refer to self-efficacy measures reported through the Carberry et al. [46] survey and perceived self-efficacy will refer to the measures taken from the supplemental survey in which students rated the perceived change in their degree of confidence for each of the self-efficacy measures.
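For concreteness, the scoring of these two instruments can be expressed in a few lines of code. The following is a minimal sketch in Python; the data frame layout and column names (e.g., pre_item_1) are illustrative assumptions, not the study's actual variable names.

```python
import pandas as pd

# Nine design tasks in the Carberry et al. instrument.
ITEMS = range(1, 10)

def score_surveys(df: pd.DataFrame) -> pd.DataFrame:
    """Compute overall design self-efficacy scores from raw item responses."""
    df = df.copy()
    # Computed self-efficacy: sum of nine 0-100 confidence ratings (max 900).
    df["pre_computed"] = df[[f"pre_item_{i}" for i in ITEMS]].sum(axis=1)
    df["post_computed"] = df[[f"post_item_{i}" for i in ITEMS]].sum(axis=1)
    df["computed_change"] = df["post_computed"] - df["pre_computed"]
    # Perceived change: sum of nine -50..+50 ratings (range -450 to +450).
    df["perceived_change"] = df[[f"change_item_{i}" for i in ITEMS]].sum(axis=1)
    return df
```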
Both assessments were delivered across two sections of the course taught by different instructors. Instructor A identifies as a white man, and, at the time of the exam and practicum, he was 28 years old. Instructor B identifies as a white woman and, at the time of the exam and practicum, she was 27 years old. The exam was used by both instructors during the initial semester and the practicum was administered by both instructors in the following semester.
Results
Responses (0–100) to the Carberry et al. [46] survey were summed for each student to determine an overall computed design self-efficacy score (maximum score = 900). Pre-assessment computed design self-efficacy scores (Carberry et al. [46] survey) were compared between the practicum and the exam using an independent two-group Mann–Whitney U test to confirm that there were no initial differences between the two groups of students (Z = −1.794, p > 0.05, r = 0.18). The pre-assessment and post-assessment computed design self-efficacy scores were then compared using dependent two-group Wilcoxon signed-rank tests for the exam and for the practicum separately (Fig. 2). There was no significant difference between pre- and post-assessment computed design self-efficacy scores for students who took the exam, Z = −0.127, p > 0.05, r = 0.019, but a significant difference was found from pre- to post-assessment for the students who participated in the practicum, Z = −3.514, p < 0.001, r = 0.497. Students who took the exam had no change in computed self-efficacy, but students who took the practicum experienced an increase in computed design self-efficacy.
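These nonparametric tests can be reproduced with standard scientific Python tooling. The sketch below is illustrative only: scipy reports U and W statistics rather than Z, so the Z values and effect sizes r = |Z|/√N are recovered here through the normal approximation, which ignores tie corrections and will differ slightly from dedicated statistics packages.

```python
import numpy as np
from scipy import stats

def mann_whitney_z(x, y):
    """Mann-Whitney U test with normal-approximation Z and effect size r = |Z|/sqrt(N)."""
    u, p = stats.mannwhitneyu(x, y, alternative="two-sided")
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2
    sigma = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)  # no tie correction
    z = (u - mu) / sigma
    return z, p, abs(z) / np.sqrt(n1 + n2)

def wilcoxon_z(pre, post):
    """Wilcoxon signed-rank test with normal-approximation Z and r = |Z|/sqrt(n)."""
    w, p = stats.wilcoxon(pre, post)  # scipy drops zero pre/post differences
    n = len(pre)  # assumes no zero differences
    mu = n * (n + 1) / 4
    sigma = np.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w - mu) / sigma
    return z, p, abs(z) / np.sqrt(n)

# e.g., mann_whitney_z(pre_practicum, pre_exam) mirrors the baseline check, and
# wilcoxon_z(pre_practicum, post_practicum) mirrors the pre/post comparison.
```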
The Pearson’s product-moment correlation between perceived change in design self-efficacy and computed change in design self-efficacy was determined for the practicum and the exam separately. Change in computed design self-efficacy was the difference between pre- and post-assessment scores on the design self-efficacy instrument (i.e., the Carberry et al. [46] survey), while the score for perceived change in design self-efficacy was determined by summing the responses (−50 to 50) from the modified version of the design self-efficacy instrument for each student (maximum score = +450). For the exam, it was found that the perceived change in design self-efficacy was not correlated with the computed change in design self-efficacy, t(44) = 1.172, p > 0.05. Conversely, the perceived change in design self-efficacy for the practicum was correlated with the computed change in self-efficacy, t(48) = 3.253, p < 0.01. In short, perceived change in design self-efficacy was not correlated with computed change in design self-efficacy for the exam but was correlated for the practicum.
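The t statistics reported above follow from the standard significance test for Pearson's r with n − 2 degrees of freedom; a small illustrative sketch:

```python
import numpy as np
from scipy import stats

def pearson_with_t(x, y):
    """Pearson r and the t statistic used to test it (df = n - 2)."""
    r, p = stats.pearsonr(x, y)
    n = len(x)
    t = r * np.sqrt(n - 2) / np.sqrt(1 - r**2)
    return r, t, p

# e.g., for the practicum group, pearson_with_t(perceived_change, computed_change)
# would return the correlation along with a t value on 48 degrees of freedom (n = 50).
```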
Independent two-group Mann–Whitney U tests were used to compare the perceived change in design self-efficacy for the exam and practicum students and the change in computed self-efficacy for the exam and practicum students. Results are provided in Fig. 3. There was a significant difference between perceived change in self-efficacy when comparing the exam and practicum students’ reports, Z = −6.467, p < 0.001, r = 0.660. The perceived change in design self-efficacy was higher for the practicum students. There was also a significant difference between the change in computed design self-efficacy for the practicum and the exam students (Z = −2.496, p < 0.05, r = 0.255). There was a greater increase in computed design self-efficacy for students who took the practicum compared with students who took the exam. Students who took the practicum had a greater increase in both computed and perceived changes in design self-efficacy when compared to students who took the exam.
Linear regression models were used to find predictors of perceived and computed change in design self-efficacy for the exam and the practicum (Fig. 4). For students taking the exam, the gender of the students and the instructor were not predictors of change in computed design self-efficacy (F(3,42) = 1.317, p > 0.05). This model for the exam predicted only 2% (adjusted R2) of the variance in the change in computed design self-efficacy scores. However, for students who took the exam, gender was a statistically significant predictor of perceived change in design self-efficacy (F(3,42) = 8.75, p < 0.001) and there was also a significant interaction of gender and instructor. This model for the exam predicted 34% (adjusted R2) of the variance in perceived change in design self-efficacy; women perceived greater gains in design self-efficacy. The women students in Instructor A's course reported greater gains in perceived design self-efficacy, while the men students in Instructor B's course reported greater gains in perceived design self-efficacy. In summary, the instructor and the gender of the students were not good predictors of change in computed self-efficacy for students who took the exam. However, gender was a good predictor of perceived change in self-efficacy for students who took the exam (women had greater gains in self-efficacy) and there was an interaction effect of gender of the student and instructor for perceived change in self-efficacy.
For students who took the practicum, gender was not a statistically significant predictor of change in computed (F(3,46) = 2.117, p > 0.05) design self-efficacy but it was a significant predictor of change in perceived (F(3,46) = 3.513, p < 0.05) design self-efficacy. This practicum model for change in computed self-efficacy predicted 6% (adjusted R2) of the variance and the practicum model for perceived change in self-efficacy predicted 13% (adjusted R2) of the variance. Figure 4 shows that women have greater perceived changes in design self-efficacy for the practicum. An independent two-group Mann–Whitney U test was used to compare the perceived design self-efficacy gains by women for the exam and the practicum (Z = −2.679, p < 0.001, r = 0.273), and results show that women in the practicum had significantly higher gains in perceived design self-efficacy compared to women who took the exam. Gender was not a predictor of change in computed design self-efficacy, but it was a good predictor of change in perceived design self-efficacy. It was found that women who participated in the practicum had greater gains in perceived design self-efficacy than women who took the exam.
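These models correspond to an ordinary least squares regression with gender, instructor, and their interaction as predictors (three model terms, consistent with the reported F(3, 42) and F(3, 46) degrees of freedom). Below is a minimal sketch using statsmodels, reusing the hypothetical data frame columns assumed in the earlier snippets:

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_dse_models(df: pd.DataFrame):
    """Regress each change score on gender, instructor, and their interaction.

    With two-level categorical gender and instructor columns, 'gender * instructor'
    expands to two main effects plus an interaction term, giving F(3, n - 4).
    """
    computed = smf.ols("computed_change ~ gender * instructor", data=df).fit()
    perceived = smf.ols("perceived_change ~ gender * instructor", data=df).fit()
    return computed, perceived

# computed, perceived = fit_dse_models(practicum_df)
# print(perceived.summary())  # F statistic, adjusted R^2, coefficient tests
```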
Linear regression models were also used to investigate whether race/ethnicity was a predictor of change in design self-efficacy for students who participated in the practicum and the exam. A relatively small sample of students identified as minorities for both the practicum and the exam (see Table 1). It was found that when minority subgroups were collapsed, a minority declaration was not a statistically significant predictor of computed (F(1,48) = 2.331, p > 0.05, adjusted R2 = 3%) or perceived (F(1,48) = 0.653, p > 0.05, adjusted R2 = 1%) change in design self-efficacy for the practicum. However, the change in computed design self-efficacy (f2 = 0.049) appears to be trending toward significance for the practicum; identifying as a minority may predict greater gains in self-efficacy. The small but non-negligible effect size suggests that the effect might reach significance with a larger sample of minority status students. Collapsed minority declaration was also not a significant predictor of computed (F(1,44) = 0.146, p > 0.05, adjusted R2 = −2%) or perceived (F(1,44) = 0.342, p > 0.05, adjusted R2 = −1%) change in design self-efficacy for students who took the exam. In summary, minority status was not a good predictor of perceived or computed change in self-efficacy for students who took the exam, nor of perceived change in self-efficacy for students who participated in the practicum. However, it is trending toward significance for gains in computed self-efficacy for minoritized students who participated in the practicum.
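For reference, the Cohen's f2 effect size cited above is conventionally derived from the regression model's R2; a one-function sketch (the benchmark cutoffs are Cohen's standard values, and the example R2 is illustrative):

```python
def cohens_f2(r_squared: float) -> float:
    """Cohen's f^2 effect size for a regression model: f^2 = R^2 / (1 - R^2)."""
    return r_squared / (1 - r_squared)

# An (unadjusted) R^2 of about 0.047 yields the reported f^2 of roughly 0.049,
# a small effect by Cohen's benchmarks (0.02 small, 0.15 medium, 0.35 large).
```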
When white and minority subgroups that comprised at least 10% of the total participants (Asian and Middle Eastern/Native African for the practicum; Asian for the exam) were used as predictors, it was found that for the practicum (see Fig. 5), declarations of white or Middle Eastern/Native African were statistically significant predictors of change in computed design self-efficacy, F(3,46) = 3.095, p < 0.05, adjusted R2 = 11%, and a declaration of Asian was trending toward significance (d = 0.261). There were no significant race/ethnicity predictors of perceived change in design self-efficacy for the practicum when another linear regression model was used, F(3,46) = 0.138, p > 0.05, adjusted R2 = 1%. Subgroup results for race/ethnicity are provided in Fig. 5 for the practicum. There were no significant predictors of computed (F(2,43) = 0.410, p > 0.05, adjusted R2 = −3%) or perceived (F(2,43) = 0.001, p > 0.05, adjusted R2 = −5%) change in design self-efficacy for the exam when minority subgroups were investigated. In summary, racial/ethnic identity was not a good predictor of change in perceived or computed self-efficacy for students who took the exam, nor of perceived change in self-efficacy for students who participated in the practicum. However, identifying with certain racial/ethnic subgroups was a good predictor of change in computed self-efficacy for students who participated in the practicum.
Discussion
All main results for this study are summarized in Table 2. In addressing whether there is a difference in the self-efficacy change experienced by students who take exams and those who take practica (i.e., the first research question), this study found that students’ design self-efficacy did not increase from pre- to post-assessment when they took a traditional exam-based assessment. However, an increase in design self-efficacy was observed in students who took a two-part practicum consisting of a team design activity and an individual reflection. It was also found that students who had participated in the practicum had much greater increases in computed and perceived changes in design self-efficacy when compared with students who took the exam. These results indicate that the design practicum was a more effective assessment tool for increasing students’ design self-efficacy than a traditional exam. This conclusion is further supported by the perceived change in design self-efficacy being correlated with the computed change in design self-efficacy for the practicum but not for the exam. As expected, these findings align well with work by Stolk et al. [32], which concluded that non-traditional STEM courses increase the intrinsic motivation of students, likely due in part to high student self-efficacy in these courses.
Table 2 Summary of main results

| Test | Assessment | Result |
| --- | --- | --- |
| Pre- to post-assessment computed DSE (Fig. 2) | Exam | No significant difference from pre- to post-assessment. |
| | Practicum | Significant increase from pre- to post-assessment. |

| Test | DSE measure | Results |
| --- | --- | --- |
| Change in DSE (Fig. 3) | Perceived | Practicum students had significantly greater increases when compared with exam students. |
| | Computed | Practicum students had significantly greater increases when compared with exam students. |

| Test | Predictor | Results |
| --- | --- | --- |
| Gender and instructor as predictors of design self-efficacy (Fig. 4) | Gender | Exam: gender was a significant predictor of perceived change in DSE but did not predict computed change in DSE. |
| | | Practicum: gender was a significant predictor of perceived change in DSE but did not predict computed change in DSE. |
| | Instructor | Exam: instructor was not a significant predictor of computed or perceived change in DSE. |
| | | Practicum: instructor was not a significant predictor of computed or perceived change in DSE. |
| | Interaction | Exam: there was a significant interaction of gender and instructor for perceived change in DSE but not for computed change in DSE. |
| | | Practicum: there was no interaction effect for computed or perceived change in DSE. |

| Test | Assessment | Results |
| --- | --- | --- |
| Collapsed minority declaration | Exam | Not a significant predictor of perceived or computed change in DSE. |
| | Practicum | Not a significant predictor of perceived change in DSE but may be trending toward a significant predictor of computed change in DSE. |
| Minority subgroup declaration (Fig. 5) | Exam | Not a significant predictor of perceived or computed change in DSE. |
| | Practicum | Declarations of white or Middle Eastern/Native African were statistically significant predictors, and a declaration of Asian was trending toward being a significant predictor, of change in computed DSE. No minority subgroups were significant predictors of perceived change in DSE. |
Note: DSE refers to design self-efficacy.
Although the primary purpose of an evaluation is not to increase self-efficacy, doing so is an added benefit with substantial effects. The increase in self-efficacy is likely due to the same positive effects of mastery experiences (i.e., authentic successes) denoted in prior literature [7,46–49,51,58]. This is important because design practica provide educators with a way not only to assess design curriculum and student learning, but also to contribute to the formation of engineering identities, which could lead to higher retention of engineering students [50,52]. An assessment that increases self-efficacy better aligns with the goals and purpose of first-year design courses, namely to increase motivation and enthusiasm for engineering [3,8,16–18,20,22].
The remainder of the discussion considers findings relevant to the second research question, which focused on the influence of demographic traits on changes in self-efficacy. This study found that the change in computed design self-efficacy for the exam was not influenced by gender or instructor. However, the perceived change in design self-efficacy for the exam was influenced by gender: women perceived greater increases in design self-efficacy when compared with men. Increases in women students’ self-efficacy could increase the number of women planning to persist in traditionally male-dominated fields like engineering [51,52]. Taking the exam may have allowed students to perceive their self-efficacy more accurately than they could before the exam [68,69]. In other words, students may have become aware of how much, or how little, they actually knew about the concepts presented. It is possible that women perceived a greater increase in design self-efficacy because they were better able to accurately assess their self-efficacy after the exam. However, it is unknown whether women had lower perceived self-efficacy before the exam because the instrument only asks students to rate their perceived change in design self-efficacy. It is unlikely, though, that women had lower perceived self-efficacy before the exam because no differences by gender were seen in pre-survey computed self-efficacy scores. Nevertheless, future studies should determine whether women students have lower perceived design self-efficacy throughout the course, including before the mid-semester assessment. Previous literature has demonstrated that incoming women students perceive themselves as academically weaker than men students, even when their academic performance is similar [70].
There was also an interaction of gender and instructor for perceived change in design self-efficacy for the exam. Instructor A identifies as a white man and Instructor B identifies as a white woman. In Instructor A's course, women reported greater gains in perceived design self-efficacy than men; on the contrary, in Instructor B's course, men reported greater gains than women. One possible explanation could be that the students in each class differed significantly in their pre-assessment self-efficacy scores by gender and therefore had different opportunities for gains in self-efficacy. When this possibility was investigated, it was found that there were no pre-survey differences in design self-efficacy by gender for either course; thus, it is unlikely that students had different opportunities for gains in self-efficacy. It is more likely that differences between Instructor A and Instructor B contributed to this effect. Many aspects of teaching method, teaching style, and instructional strategy have been linked to student self-efficacy [58,71], any one of which may explain this interaction. Due to the considerable differences between Instructor A and Instructor B, no other conclusions could be made to explain the cause of the interaction effect.
For the practicum, gender only influenced perceived change in design self-efficacy and did not influence change in computed design self-efficacy. It was found that women had greater perceived gains in design self-efficacy for the practicum when compared with men. This may, again, be due to women perceiving themselves as academically weaker before the assessment and assessing themselves more accurately afterward, but it is also possible that the mastery experience has a greater influence on women's self-efficacy than on men's [57,58,70]. Mastery experiences in historically male disciplines have been found to influence women's self-efficacy differently than men's [57]. Women also had higher perceived gains in design self-efficacy for the practicum when compared with women who took the exam, most likely because the mastery experience of the practicum was more influential on self-efficacy than an exam. This is an important finding because it shows that a practicum-style assessment has the ability to increase the self-efficacy of underrepresented groups in engineering.
In addition, it was determined that identifying as certain races/ethnicities can predict change in computed design self-efficacy scores for the practicum. Furthermore, identifying as a minority subgroup student was trending toward being a significant predictor of change in computed self-efficacy for the practicum but not for the exam. Previous research suggests that aspects of a mastery experience can have a greater influence on the self-efficacy of certain ethnic groups (e.g., some ethnic groups are influenced by vicarious experience more than others) [57]. It has also been shown that the confidence of ethnic group members develops through different factors than the confidence of white students (e.g., other-oriented versus self-oriented formation of confidence) [57]. Minority status was trending toward significance for the practicum but was not significant for the exam. This could suggest that the team aspect of the practicum allowed social persuasion and vicarious experience to contribute to minority status students’ self-efficacy. Previous research has found that minoritized students appreciated learning from and benefited from their teammates during design scenarios [72]. This result highlights the ability of the practicum to further affect underrepresented groups in engineering.
Limitations and Recommendations for Future Research.
This study was constrained by a small and relatively homogeneous sample (in terms of racial diversity) due to the limited diversity of the institution. However, the results based on this sample are promising, as they indicate that the practicum could be used to increase the self-efficacy of women and may increase the self-efficacy of minority status students. Future work could expand the study to include additional universities and institutions with more diverse student populations to better understand in what environments such an intervention would be most beneficial. The results also suggest that the teaching style or positionality of the instructor may influence perceived self-efficacy gains for students taking traditional exams. However, because of the small number of instructors in this experiment, these results are anecdotal at best. A future replication study with a larger sample size, greater demographic diversity, and multiple class sections at multiple institutions is necessary to pursue this result further. It is also unknown whether these gains in self-efficacy persist over long durations of time because no follow-up surveys were distributed to participants. Given that our small sample included only two faculty instructors, a replication study should be designed to more fully investigate the effects of instructor identity on student performance and outcomes. Additionally, the replication study should address whether the effects seen from the practicum were due to the team design task or to the individual reflection.
Future work should conduct longitudinal studies with participants to explore if the differences in design self-efficacy are still prevalent after a significant period of time has passed. Furthermore, future work should investigate what parts of the practicum are most important for enabling students to achieve higher self-efficacy. Finally, future work should determine the effects of different types of practicum assessments on student self-efficacy and explore the effects of an individual style practicum rather than a team design task.
Conclusion
This study examined whether and to what extent assessments influence design self-efficacy by comparing a traditional exam and a practicum in a PBL design course for first-year engineering students. The effects of the students’ demographic traits on design self-efficacy for both types of assessments were also investigated. When analyzing pre-assessment and post-assessment measures of self-reported design self-efficacy, students who had participated in the practicum had the greatest gains in computed and perceived design self-efficacy when compared with students who had completed the exam. The practicum also increased women's design self-efficacy more effectively than it did men's design self-efficacy. This suggests that a practicum assessment is more effective in increasing the design self-efficacy of introductory students, especially for students from underrepresented groups, when compared with a traditional exam. An assessment which increases students’ self-efficacy better aligns with the goals and purpose of a first-year design course. Increasing self-efficacy in this way holds significant implications for the long-term retention of students in engineering degree programs.
Acknowledgment
An early version of this work was included in the proceedings of the 2020 ASME IDETC [73].
Funding Data
The first author's research assistantship is supported by the Defense Advanced Research Projects Agency through cooperative agreement N66001-17-4064. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsor.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request.