Teaching in the age of artificial intelligence and ChatGPT

Artificial intelligence enables increasingly sophisticated interactions between humans and machines. This technology, currently popularized by ChatGPT, presents a potentially huge opportunity for business people, business students, and business educators. However, the same technology poses an equally enormous challenge for educators trying to illustrate, explain, and apply these ideas in the classroom. How can we consciously and openly incorporate AI into our courses while reducing the risk that students pass off AI output as their own (also known as “cheating”)?

The term “metaverse” was coined by Neal Stephenson in his 1992 science fiction novel “Snow Crash.” It is no longer fiction. The idea fueled massively multiplayer online games such as World of Warcraft, and it has since evolved to host all kinds of human interaction, including commerce, in two and three dimensions. Maybe the “person” you’re talking to isn’t actually a person at all, but a machine. Combined with artificial intelligence (AI), blockchain, and decentralized autonomous (i.e., human-less) organizations – collectively known as Web3 – the metaverse is the next frontier where our students will find business opportunities.

This is also a new frontier for education, extending the online and hybrid formats of the pandemic years to reach students in remote areas with immersive, hands-on experiences. It could be the beginning of the “death of distance,” where learning happens regardless of where the learner happens to be.

As a professor at Harvard, Stanford and Hult, I ask graduate students studying innovation to use AI in their major theses. Here is the method to my madness. First, I ask teams of students to envision new businesses that either use multi-sided platform marketplaces or generate revenue in the purely virtual world of Decentraland, and to record their presentations on video. Second, I randomly assign students to critique these team presentations. The template I provide asks several questions: for example, how did the team design its network effects, and how did it mitigate the headwinds that undermine them? The final question in the series requires students to ask ChatGPT to write its own critique of the team’s idea. This forces students to iteratively refine the query until the AI provides an optimally useful answer. That skill – asking the right question – is one the next generation of business leaders needs to learn.
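For readers who want to see the mechanics, the refinement loop can also be scripted rather than typed into the chat window. Below is a minimal sketch using OpenAI’s official Python client; the model name and the example prompts are my own illustrative assumptions, not part of the actual assignment template.

```python
# Minimal sketch: iteratively sharpening a critique prompt via the OpenAI API.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the
# environment; the model name and prompts are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model would do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Start broad, then tighten the question -- the skill the assignment teaches.
drafts = [
    "Critique this business idea: a rental marketplace for virtual land.",
    "Critique the same idea, focusing only on its network effects: "
    "which side of the marketplace must grow first, and why?",
    "List the three strongest headwinds that could undermine those "
    "network effects, with one concrete mitigation for each.",
]

for prompt in drafts:
    print(ask(prompt))
    print("-" * 60)
```

Each successive prompt narrows the question, and comparing the three answers shows students directly how much the quality of the output depends on the quality of the query.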

I also require students to independently verify the accuracy of ChatGPT’s responses. Borrowing a policy from Ethan Mollick, an associate professor at Wharton, I declare that students are responsible for the final conclusions they draw from AI. They must find sources that reinforce or refute the AI’s answers. As with any tool or external resource, students must accurately cite their use of ChatGPT. Because I don’t know the proper format for such a citation, I suggest they ask ChatGPT how to cite ChatGPT.

There are several objectives underlying this part of the thesis. First, students must learn to leverage AI to improve the breadth, depth, and articulation of their knowledge. AI is out of the box, and we can’t put it back. Instead, just as with fire, iron, electric light, and other transformative innovations, professionals must learn how best to apply the technology to solve customer and societal problems. Universally available AI will also change the skills that graduates are expected to deploy soon after they join a company. Rather than collecting and manipulating data to support business decisions, professionals must master the art of asking the right question so that AI can scan and organize all existing public knowledge into a useful answer. And instead of building a sustainable competitive advantage on intellectual property protected by patents or trademarks, the professionals of the future will make money through speed of adaptation.

Second, AI is not infallible. Its answers can be wrong not only because of the data they are based on; they are also skewed by who asks the question and how it is asked. The only way to improve AI’s conclusions – and thus the data that the AI models of the future will draw on – is to hold individual humans responsible for the accuracy of those conclusions. This is the same approach academia took with the proliferation of books, with the development of statistical methods, and with reliance on the internet: authors are responsible for their conclusions.

Third, the task forces students to weigh their own wisdom against the machine’s, directly comparing their experience and logic with the AI’s responses. This reinforces the importance of human creativity and intuition as drivers of innovation.

AI can be good news for education, but there is also a dark side: Web3 will present challenges for both teacher and student. The anonymity and scale of virtual worlds – the very features that can advance equity and diversity – can also invite distraction and inappropriate interactions. Decentraland, for example, is accessible in an ordinary two-dimensional browser to anyone, with no identity verification required. An avatar, a fake name, and poor impulse control can lead students down unproductive paths.

AI is now fully capable of writing a student’s assignment. I asked ChatGPT, “How can a teacher tell if a student wrote an essay themselves or used ChatGPT?” It answered:

There are several ways a teacher can tell if a student wrote an essay themselves or used a tool like ChatGPT to generate the text. One approach is to use plagiarism detection software, which can identify whether the essay matches existing texts on the Internet. This can be a reliable way to determine whether a student used ChatGPT or another text generation tool, as such tools often produce output similar to existing text.

Another approach is to ask the student directly. A teacher can ask the student to explain the thought process and reasoning behind the essay, and compare their answer to the essay itself. If there are significant discrepancies between the student’s explanation and the essay, it may be a sign that the student used a tool like ChatGPT to generate the text.

In addition, a teacher can also look for clear signs of text generation in the essay itself. For example, the essay may contain repetitive or meaningless sentences, lack coherence or logical structure, or use language or terminology that is not typical of a student at the appropriate level. All of these can be indicators that the essay was generated by a tool like ChatGPT.

This answer itself is almost indistinguishable from one written by a real person! Note that it even inserted gender-neutral language and transition words. Even after a decade of reading student submissions, I can’t distinguish this AI response from that of a human MBA.

A new tool, GPTZero, attempts to use AI to determine whether an essay was written by AI. It calculates a “perplexity” score, which measures how predictable the text is: machine writing tends to be consistent in sentence structure and word choice (low perplexity), while human writing is more erratic (high perplexity). The obvious hole in this tool is that a student can simply ask ChatGPT to generate a high-perplexity essay.
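To make the measure concrete: perplexity can be computed directly with any open language model. Here is a minimal sketch using GPT-2 from the Hugging Face transformers library; GPTZero’s actual scoring is proprietary, so this only illustrates the underlying idea that predictable, model-like text scores low while erratic, human-like text scores high.

```python
# Minimal sketch of a perplexity score, the measure behind tools like GPTZero.
# Assumes `torch` and `transformers` are installed; GPT-2 stands in for
# whatever model a real detector uses, so the numbers are only illustrative.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Exponentiated average token loss: low = predictable, model-like text."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

human = "Honestly? My thesis wandered all over before it found its point."
machine = "There are several ways a teacher can tell if a student wrote an essay."
print(f"human-ish:   {perplexity(human):.1f}")
print(f"machine-ish: {perplexity(machine):.1f}")
```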

It’s an annoying axiom that technology moves faster than culture. The release of ChatGPT a few months ago has already changed the education landscape. My purpose as a servant of learning is not to defend academic tradition, but to discover new ways to prepare MBAs for the professional world they will soon join. That world includes ChatGPT. More to come.
