How Are GAI Tools Impacting College Educators?

5 MARCH 2024

By Stella Min and Veronica Woodlief

The Academic Data Science Alliance (ADSA) recently hosted a discussion about how generative AI (GAI) technologies like ChatGPT are impacting the way that postsecondary educators are thinking about designing curricula and assessing student competencies. The panel included Lance Eaton, Amanda Ellis, and Anna Mills. Learn more about each panelist and some key takeaways from the discussion below.

ADSA does not support or endorse any of the products mentioned in this blog post. The opinions and recommendations provided are solely those of the panelists involved.

[Image: the three panelists’ faces with the words “Generative AI in the Classroom”]


Key Takeaways From the GAI in the Classroom Panel

How are GAI tools being used by college instructors?

Despite frequent media coverage of the proliferation of GAI tools in higher education, our panelists indicated that many instructors are not using them, citing a lack of guidance from their institutions or, worse, ambiguous and confusing policies. According to Lance Eaton, many postsecondary educators are hesitant to use a GAI tool that their institution has not approved. Among the tools that are being used, the panelists mentioned Canva, Grammarly, ChatGPT, and Microsoft Copilot.

Beyond institutional policies, many instructors are concerned about the potential drawbacks of incorporating GAI tools into assignments. Amanda Ellis, for example, acknowledged concerns that students may skip important steps in the learning process. For both instructors and students, there’s also a concern that GAI tools compound existing pressures to increase productivity. Lance highlighted this risk as especially acute right now, amid widespread reports of burnout and elevated rates of quitting.

What about ethical concerns among students?

A common assumption is that students are comfortable using GAI tools, but that’s not always the case. Some students may prefer to avoid tools that were trained on data collected without the consent of its creators. Anna Mills advocates for an approach that lets students choose whether to use GAI tools in assignments. This approach is consistent with Katie Conrad’s Blueprint for an AI Bill of Rights for Education.

How do the challenges and opportunities concerning GAI differ between disciplines?

The panelists unanimously agreed that the opportunities and challenges of GAI transcend disciplinary boundaries. They emphasized the pivotal role of educators in setting clear guidelines for students navigating AI and in explaining how these tools can both aid and impede learning. By delineating the permissible applications of AI and nurturing a culture of transparency, educators encourage students to critically assess how and why they use AI. For example, Anna Mills includes a "Tools and Rules" section in her assignments, recommending where AI tools might be suitable for enhancing learning and where they shouldn’t be used to bypass genuine learning.

Beyond the curriculum, the panelists noted promising benefits of GAI, such as enhancing accessibility for students with disabilities and helping students develop soft skills like writing professional emails, managing time, and taking notes. Benefits such as these, along with the potential drawbacks of GAI tool use outlined above, illustrate the complexity of navigating and creating policies around GAI tools in the classroom.

How can faculty advocate for a collaborative approach to creating AI policies?

Anna served on the MLA-CCCC Task Force on Writing and AI. One of the Task Force’s recommendations was that higher education policymakers collaborate with faculty in forming and evaluating AI policies rather than taking a top-down approach (see slide 23). When asked how faculty can advocate for a collaborative approach, Anna pointed to Lance’s work at College Unbound as an excellent model, which you can read about here. Recommendations include:

  • Encouraging participation from faculty across different departments. AI policies have implications beyond the classroom, affecting areas such as accessibility, marketing, and HR. By involving a diverse group of faculty, staff, and colleagues from across the institution, you can ensure a comprehensive and nuanced approach to policy development.
  • Adopting a tiered approach to policy design. Begin with institution-wide policies that provide general guidelines, then develop discipline-specific policies tailored to the unique needs of different academic areas. Finally, allow room for instructor-specific policies to accommodate academic freedom while maintaining consistency.
  • Advocating for iterative policy design that allows for regular review, reflection, and revision. AI technologies and our understanding of them are constantly evolving, and this approach ensures that policies remain relevant and responsive to changing circumstances.
  • Involving students in the policy-making process. Engaging them in discussions about AI ethics, implications, and policy development helps prepare them to be future leaders in shaping AI policies.

Lance stressed the latter point and recommended starting with student input, since students tend to be the most affected by a school’s policies. He also emphasized the importance of engaging students with diverse needs and backgrounds in these discussions, both to minimize the risk of unintentional harm and to ensure that AI policies accommodate the needs of the entire institution.

Do you have concerns about intellectual property and the content being scraped by generative AI tools?

The panelists expressed ambivalence on the issue of GAI tools and intellectual property (IP) rights. Lance, for example, openly expressed his disdain for the current US IP system and hopes that these tools will help us create a more open system that encourages sharing; at the same time, he is concerned about companies profiting from tools trained on people’s work without permission. Anna embraces open science principles, sharing her resources freely, and isn’t necessarily against people using her work to train AI models; her concern is that many of these tools are not built to cite sources, which raises issues of plagiarism. Amanda likewise believes that GAI tools can be beneficial, but she feels powerless over how foundation models are trained. She also pointed out that, while she does not think her own work is immediately threatened by GAI tools, others, such as actors and artists, are seeing their work undermined.

How do you envision the state of generative AI tools in higher ed? What do you think those conversations might be like?

With the caveat that most predictions are wrong, the panelists believe that the higher education community will likely still be discussing the (in)appropriate uses of GAI. Lance suggested that the pressure for productivity is unlikely to go away and that there is a risk the benefits and harms of GAI tools will fall unevenly across groups in ways we are all too familiar with. Amanda also expressed concerns about access and equity. Anna expressed hope that conversations about GAI will center on human learning and collaboration rather than surveillance.

Summary

Our panel provided several insights about the use of GAI tools by college educators and students, including inconsistent or insufficient institutional guidance and concerns about students’ learning processes. The panelists also discussed ethical concerns such as intellectual property rights, data consent, and plagiarism; shared their predictions for the future of GAI in higher ed; and offered suggestions for a collaborative, iterative approach to AI policy development with institution-wide input. Watch the full recording on our YouTube channel to learn more.


About the Panelists

Lance Eaton

Lance Eaton is the Director of Faculty Development & Innovation at College Unbound, a part-time instructor at North Shore Community College, and a PhD student at the University of Massachusetts Boston, where his dissertation focuses on how scholars engage in academic piracy. His work explores the possibilities of digital tools for expanding teaching and learning communities while considering the deeper issues and questions that educational technologies raise for students, faculty, and higher ed as a whole. He has given talks, written, and presented at conferences on generative artificial intelligence tools in education, academic piracy, open access, OER, open pedagogy, hybrid flexible learning, and digital service learning.

Amanda Ellis

Amanda Ellis is the Vice Chair of the Department of Biostatistics and Director of Graduate Studies for the Master of Science in Biostatistics (MSBST) program at the University of Kentucky College of Public Health. Dr. Ellis joined the college as an Assistant Professor in 2020. Her focus is on graduate education, along with course and curriculum development. She earned her undergraduate degree in mathematics and her M.S. and Ph.D. in statistics from the University of Kentucky. She previously taught at Eastern Kentucky University, where she focused on undergraduate and graduate education.

Anna Mills

Anna Mills has taught community college English for 18 years and currently teaches at Cañada College. She serves on the MLA-CCCC Task Force on Writing and AI and curates an AI resource list for the Writing Across the Curriculum Clearinghouse. Anna has published an OER textbook, How Arguments Work, as well as essays on AI in The Chronicle of Higher Education and Inside Higher Ed.


Resources

Below are links to content that was mentioned during the panel, along with resources curated by ADSA staff.
