Answering Your AI Questions with Daniel Frank, PhD

Marisa Bluestone
Community Manager

Daniel Frank, PhD, is one of three subject matter experts who contributed to the first course at the Institute at Macmillan Learning, "Teaching with Generative AI: A Course for Educators." Blending asynchronous and synchronous learning, the course brings diverse perspectives to the discourse surrounding AI in education. It offers practical experience in formulating AI-related course policies, designing AI-informed assignments, and fostering dialogues with students on AI applications.

Dr. Frank offers a unique perspective on AI in higher education, tackling three key questions from our AI webinar series last fall. Explore his background and his answers to real questions from fellow professors for a closer look at the practical knowledge the Institute course will offer.

Daniel Frank, PhD, teaches First Year Composition, multimedia, and technical writing within the Writing Program at the University of California, Santa Barbara. His research interests include AI writing technologies, game-based pedagogy, virtual text-spaces and interactive fiction, passionate affinity spaces, and connected learning. Dan is always interested in the ways that new technologies interface with the methods of making, communicating, learning, and playing that students engage with across digital ecosystems. His pedagogy is rooted in helping students find their own voices and passions as they learn to create, play, and communicate research, argumentation, and writing across genres, networks, and digital communities.

Should educators consider it their responsibility to educate students on the ethical and responsible use of AI tools, akin to how they teach the responsible use of platforms like Google and Wikipedia and tools like graphing calculators?

Daniel Frank: It’s long been my position that this technology is already ubiquitous (and becoming more so), and that attempting to ban all use or consideration of it will not remove the tech from our students’ lives; it will only remove honest approaches to and conversations about the tech from the classroom. Generative AI is a strange technology that can be easily misunderstood and misused. I think it’s much more productive to bring the tools into the light, where they can be critically considered, rather than sweep them into the shadows for students to use in all the wrong ways.

What are some strategies to foster students' intrinsic motivation through generative AI, focusing on methods that go beyond external incentives such as grades or assignment completion?

Dan Frank: It’s worth noting that the points-based, grade-focused approach of much of traditional education isn’t conducive to valuing personal growth and development. If education is framed as a transactional process where students are here to get their grade and move on, they will turn to tools that promise to automate or alleviate that arduous process. If we want to instill intrinsic motivation in our students, we’ll have to create space in our curriculum for experimentation and risk-taking. Students should be encouraged to see LLMs as the limited technologies they are and to value their own critical thinking, choices, and rhetorical sovereignty when interfacing with these tools, but the pressure to make their work ‘perfect’ to get the points they need will short-circuit that process and tempt them to cut corners. I think it can be very valuable to consider how, for instance, a paper that clearly leans too heavily on generative AI at the cost of clear, unique, personalized critical thinking might serve as a learning opportunity rather than an ‘I caught you’ moment.

How can we harness AI to boost students' writing skills while ensuring they actively engage in the writing process rather than solely relying on AI-generated content?

Dan Frank: I think the key to this is to help students learn to value what they can bring to the table that AI cannot. It’s very important to help students learn to critically ‘read’ the output of a large language model (LLM) such as ChatGPT. Though this is a revolutionary technology, it is still deeply limited: it lacks the deeper thinking, creativity, and critical judgment that only a human brain can bring to a paper. Students can be taught to see how LLMs produce predictable sentence structures, throw around unnecessary ‘fluff,’ tend to sound like they’re ‘selling’ rather than analyzing, gesture at ideas without really unpacking them, and so forth. The second part of this is to help students demystify the processes of composition. Many students think that if they can’t produce perfect, beautiful writing on the first attempt, they won’t be able to at all, but concepts such as freewriting, iterative drafting, think-pair-shares, and clustering and mind mapping (which LLMs might help with!) can help students see that writing is a constant, continual, developing process, and that this is true for even the best writers in the world. I think that in understanding both of these elements, students can learn to value the development of their own unique voice and will be less inclined to resort to LLM output at the cost of their own rhetorical options.

Learn more about the "Teaching with Generative AI" course.

Learn more about Daniel Frank.