INTRO TO AI
How do you know that a human wrote what you’re currently reading? How do you know that The Norse Star staff didn’t just command an artificial intelligence (AI) program to create a cover story for us this month? Could you tell the difference between a human-created article and a machine-generated one? Although many may confidently respond “yes,” certain they could spot the use of AI, that confidence may no longer be warranted. The reality is that AI is growing and expanding, and from essay writing to Snapchat assistants, it seems to have infiltrated nearly every aspect of our lives.
According to David Williamson Shaffer, the Sears Bascom Professor of Learning Analytics and the Vilas Distinguished Achievement Professor of Learning Sciences at the UW-Madison School of Education, there isn’t just one strict definition for AI.
“AI is kind of a confused and fuzzy concept in people’s minds and in the field,” Shaffer said. “Before, let’s say, a year ago, or two years ago, pretty much anything that a computer did was called AI.”
When we think of AI, we tend to think of large language models—most famously, ChatGPT. Shaffer explained that programs like these essentially scrape the internet in search of information they can use, legally or not. They then use this gathered information to predict what may come next, similar to the typing suggestions commonly seen on cell phones or the recommended wording in Gmail.
“These large language models are like autocomplete on steroids. And they’re trying to predict not just which word or two comes next, but what whole sentence comes next,” Shaffer said. Many suggestions turn out correct and are helpful to us. Others, not so much. Shaffer then joked that he views AI as “sort of like an undergraduate [student] who skimmed the reading and is trying to fake their way through class.”
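To picture what “autocomplete on steroids” means, here is a toy sketch of next-word prediction in Python. It only illustrates the idea: real large language models use neural networks trained on enormous amounts of text, not simple word counts, and the tiny example corpus here is invented purely for demonstration.

```python
# Toy next-word predictor: count which word tends to follow each word,
# then suggest the most common follower. This is the concept behind
# autocomplete, vastly simplified; it is NOT how ChatGPT actually works.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Tally each word's followers across the corpus.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

A real model does the same kind of prediction over whole sentences and paragraphs, with billions of learned parameters instead of a lookup table, which is why its suggestions can sound fluent even when the underlying facts are wrong.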
There’s no denying that, at times, AI language models can gather correct information to create a cohesive product. If you ask ChatGPT to write an essay about a well-known and well-researched topic—let’s say World War II—it can do a reasonably good job summarizing the sources readily available on the internet. However, ChatGPT falls flat if you ask it to write an essay about a more obscure topic—for example, a report on a lesser-known book. Oftentimes, AI language models will even go so far as to make up their own quotes and sources, complete with fake citations. Why? It’s simple: there’s less available information for the AI to base its answer on.
Although AI has its faults, it’s not entirely negative, according to Shaffer. AI programs can be used for a myriad of tasks, such as giving feedback, providing starting points, and gathering inspiration. However, the faults of AI are often broadcast more than its usefulness, leading to mixed responses from the public.
“We can try and ignore it. Basically, just pretend it doesn’t exist, stick our heads in the sand, put our fingers in our ears, and hope that it goes away,” Shaffer said. “We can try and outfox it, meaning we can try and come up with assignments that ChatGPT can’t do.”
Included in this could be the use of AI-detecting software in addition to the plagiarism policy we already have in effect. However, Shaffer argued that there are better ways to deal with this new, expansive technology than these methods.
“We can teach about large language models, and we can explain what this technology is and how it works. We can talk about its ethical problems which, […] there are quite a lot of,” Shaffer said. “The last response is that [we can] teach with it, and we can incorporate [it into] teaching. […] We need to teach students how to do it because it’s going to be a life skill eventually.”
Schools are supposed to prepare students for the world they are about to enter—one that will also include AI, according to Shaffer. However, AI’s exact role in the future is vastly unknown.
“[AI is] going to have some role. Certainly. One of my mentors once said, what can be done is a technological question, what should be done is a pedagogical question, what will be done is a political question,” Shaffer said. “At this point, we basically don’t know the answer to any of those things. We don’t really know what it can do or will be able to do. We don’t know what we think it should do. And we certainly have no idea where the politics and economics [of AI] will wind up.”
AI’s impact could extend to the workplace, too. It’s not the first time new inventions have cut jobs. One example from Shaffer is the prevalence of robots in factories: when robotics entered the workforce, machines could create more products faster than human workers could, causing many to lose their jobs. With the recent expansion of AI, does a similar effect await us?
According to Shaffer, a variety of scenarios are possible. On one end, AI could replace many white-collar workers; on the other, it could simply become a tool for us to use, the same way Excel is. After all, there was a time when calculators were viewed similarly; now, we use them practically every day, and we’re allowed to use them on tests.
Moral issues follow AI as well. Is autocompleting stealing? AI language programs didn’t get consent from creators to use their work; is repurposing it ethical? Does this even qualify as repurposing? Even if we wanted to, is it feasible to attempt to regulate AI, considering its massive rise in use as of late?
“I think there are a lot of moral and safety concerns about AI. Do I think we can address them? I think some of them can be addressed,” Shaffer said. “Can we make it harder for people to do malicious things [with AI]? Yes, absolutely. Can we be more careful about the biases that are in AI? Can we be more careful about the extent to which it’s unfairly replacing workers because it’s stolen their ideas in the first place? Yes. […] We can certainly legislate and negotiate to avoid some of these things. But no, I don’t think we can get rid of them entirely.”
Do AI’s negative uses outweigh its positive ones? Considering its moral implications and rapid expansion as of late, should we be concerned about AI?
“We should always be wary of [AI] because the pace at which technology advances is not necessarily equal to the pace at which our understanding of it advances, and it’s certainly not equal to the rate at which our political system responds to it,” Shaffer said. “We should be concerned. I would say the same thing about any new technology; this one is just moving particularly rapidly.”
AI, just like any other technology, is not inherently anything. One can’t simply declare AI evil or immoral or even perfect and helpful; you have to gather your own thoughts on AI, and the best way to understand it is to try it, according to Shaffer.
“You should try [using AI]. Try using it and see. And when you try it, don’t just try and ask it cocktail party questions and see whether it can sound like it gives a reasonable answer. Ask a question about something that you know about, […] something where you can evaluate the quality of its answer. See how easy or hard it is to use it, see how often it makes mistakes, how often it does things that are [something someone your age would do],” Shaffer said. “I encourage people always to be educated about what’s going on in the world around them, certainly about technology.”
AI IN THE CLASSROOM
While AI can be used for simple tasks such as playing music, setting reminders, or adding an item to your grocery list, it can also be used—positively or negatively—in the classroom. For students, that could mean writing essays or studying for tests; for teachers, AI can be used to write quizzes or lesson plans. But even though AI is often treated as all-knowing, it’s not perfect.
Stoughton High School Language Arts teacher Brenna Squires has experienced AI use in the classroom, even experimenting with it herself.
“[Teachers] have been trained to recognize what’s a good quiz or a good lesson plan. You have to actually check [the AI’s] work. So while it could be a great time saver, I think it’s still incredibly flawed,” Squires said. “I had tried to write an ‘A Long Way Gone’ quiz, but [it] was 75% wrong, and then it wanted me to make corrections. It wanted to use my knowledge of the book to make itself smarter, and at that point, I was like, ‘No way, not helping you.’”
AI in the classroom can be difficult for teachers because there isn’t a solid policy around it. It can also be tricky to differentiate between permitted AI like Grammarly, which merely suggests, versus ChatGPT, which will write an entire essay.
“Most of it boils down to if the students’ thinking and critical thoughts are the biggest part of what they’ve presented, or if AI is giving the biggest chunk,” Squires said.
It’s not always easy for teachers to detect AI in students’ papers, but they have some tools. A majority of language arts teachers use turnitin.com to check for plagiarism, and AI use fits into that category because it still means taking work that isn’t yours and claiming it as your own. Teachers can also detect AI because they learn their students’ writing styles. If something seems off, they can plug a piece of the writing into Google; if it comes up on a website, the student very likely didn’t write it themselves.
AI has been evolving for years, becoming smarter every day. It even has the ability to put quotes and in-text citations into essays, although lacking the individual’s understanding and thinking. But why has AI advanced and evolved so much in such a short period of time?
“At first, it kind of scared me because I thought, ‘it’s going [to] be an easy way for kids not to learn how to write or think or do anything because it’ll do it all for them.’ But over time, I think I’ve realized that’s just the next phase of life […]. So, we need to learn how to embrace it,” Julie Lynch, another Language Arts teacher, said. “In general, society wants advancements and to make our lives easier and to have less work so we can enjoy more things. And this is just the next logical step of, you know, having robots think for us.”
As AI becomes so advanced and prevalent in our society, teachers and students must work around it and adjust to a new way of life and learning.
“I think that the curriculum is going to completely change. We’re not going to have to teach some of the things that we’ve taught in the past because AI will do it for us. So, I think it’s going to shift more towards discussion-based, socio-emotional learning, coping strategies, and maybe […] reading and less things that a computer can do,” Lynch said.
Squires agrees with the sentiment expressed by Lynch: that more education surrounding AI is needed.
“We need to give you guys more background into why AI is beneficial and also why it’s not going to help you learn and become more fully developed human beings,” Squires added.
SHS administration has taken steps to keep students from using AI to cheat. Many AI sites, like ChatGPT, have been blocked on all student Chromebooks. But is this approach really effective?
“I don’t think [banning AI sites on students’ Chromebooks is] effective. You guys walk around with computers in your pockets with your phones. All you have to do is pull it up on your phone and type it into whatever, anyway,” Squires said. “I think it’s an attempt to assert some control over a situation that we don’t control yet. So I think it’s an appropriate step […] but don’t think it’s really enforceable.”
If AI keeps advancing the way it has been, it’s likely to take over many functions and jobs that humans fill in today’s society. It will take time to adjust to the fact that robots aren’t just part of sci-fi movies, but are real. Especially in schools, the presence of AI will change a lot about how students learn and how teachers teach.
“As a professional, I don’t want [AI] to take my expertise away from me. As a student, yes, it’ll write a decent essay […], but it’s still not perfect. As students, you guys need to learn to be critical thinkers; I think AI kind of robs you of that. It’s cool and efficient and really, really a neat piece of technology, but I don’t want it to make us all dumber,” Squires said. “When we think about education and how it works, and you think about how students grow through practice, replacing that practice with computer-generated responses has big-time consequences for kids. And beyond that, what kind of society we want to have later. […] Who’s really making [all of] the decisions? Is it you? Is it the computer? I worry about that. That as a country or as a culture, you would become completely dependent on something robotic.”
AI IN ART
AI has reached high levels of intelligence, allowing it to create artwork and songs from a simple description of what you want. If computers can do things the average human can’t, where does that leave artists who make a career out of what they do? Or musicians who write from the heart? And how can we tell whether a piece of art or a melody was created by AI?
Although AI can create art, should it? According to SHS art teacher Jason Brabender, AI art is less genuine than human-created art.
“I think the idea is that if you’re just going to say, ‘Make me a blue painting of a landscape,’ and boom, the computer just does it, and then you print it, and you sell it; I think that loses the human touch,” Brabender said.
Although AI is a tool that people can use to be more efficient, it’s easy to go overboard and lose ourselves in the world of technology.
“Well, I think that because the computer can do so many things that a person potentially can’t. It does have some function that would be beneficial because you can just tell the computer to spin this story together, and poof, you’ve got something, whereas it might take months [or] two weeks if you just want to spitball an idea for real, you know, instead of just having the machine kick it out very quickly,” Brabender said.
A similar shift is happening in music: AI is being used in its production. This can be seen, most famously, in the release of the most recent and final song by The Beatles, “Now and Then.” The song was released on Nov. 2, 2023, and used AI to isolate the vocals of John Lennon—who died in 1980—from an old recording, producing a cleaner track. However, will AI’s involvement end here? AI can already replicate voices, but to what extent will this continue? SHS music teacher Ryan Casey thinks this use of AI in music could be regulated in the future.
“I mean, there might be how music is coded, [there are] wav files and mp3 files, and when you play a song, you can get information on it. Maybe there’ll be some law that puts in [that] this was automatically […] made, or it shows that it’s AI,” Casey said.
Music and art are not the only fields AI is entering; it has also been used alongside CGI to recreate faces on screen, as in Star Wars: The Rise of Skywalker. After Carrie Fisher, who played Leia Organa, died, her character was still needed in the movie. The filmmakers completed her scenes using previously shot footage of Fisher combined with CGI and a stand-in actor.
According to Shaffer, “The recent Screen Actors Guild strike was about whether or not studios could take their likeness and then recreate it and use it in AI. Use it with AI to create more movies that the actors aren’t compensated for. Same thing about script writers, right?”
MILES VS. AI
We decided to try out AI’s capabilities when it comes to creating art, giving the same two prompts to our Head Artist, Miles Heritsch, and to a Canva-based AI program. Here are our results:
Prompt #1: A group of people playing music in a band
Prompt #2: Two pet fish meeting for the first time in their tank
FINAL THOUGHTS
Although AI’s full capabilities and limitations are currently unknown to us, one thing is for sure: AI and its impact are here to stay. AI’s future in our daily lives may be positive—from aiding in the classroom to enhancing art and music—or it could yield disastrous results, the same way many new technologies throughout history have divided us. The possibilities are endless and largely unknown. As AI expands, so does our understanding of its role in our society.