Student perceptions

Navigating the AI Wave: Fears Regarding AI’s Role in the Future of Education

This guest post is authored by Faith Amorè Wallace (She/Her)

Faith is an international student pursuing a bachelor’s degree in psychology. She aims to navigate the evolving landscape of AI in education, shedding light on its transformative potential and the imperative need for thoughtful policies to harness its full benefits.

‘We tend to overestimate the effect of a technology in the short run, and underestimate the effect in the long run’ — Roy Amara; “Amara’s Law”

In the ever-evolving landscape of technology, each era witnesses a new wave of innovation, often described by the media as a ‘technology boom’ or ‘technology revolution’, that quickly and vastly reshapes the way we live, work, and learn. Artificial Intelligence (AI) is currently at the forefront of this transformative wave, and its integration into education has sparked concerns and fears about potential negative outcomes. As a college student navigating this dynamic environment, it is essential to draw parallels with past technological revolutions to understand how we can address and mitigate these apprehensions.

Pay-to-Play: Acknowledging Advantages in the Academic Landscape with AI

One prevalent anxiety surrounding AI in education is the fear that it will worsen education’s pay-to-play dynamic, in which students with financial means can access better-quality resources, creating academic disparities. The comparison of ChatGPT Plus to tools like Grammarly, which offer premium features for a subscription fee, is apt. Although plenty of resources are hidden behind paywalls, including private tutors and academic websites (e.g. Chegg, Course Hero), these are seen as complementary: aids that improve a student’s original work rather than replace it. Students’ concerns lie more with generative content substituting for a student’s original work, producing a submission, in part or in full, under the guise of its being an original piece.

Students have grown comfortable with resources that still demand a meaningful amount of personal effort regardless of the academic advantage they confer, trusting plagiarism regulations to catch attempts to pass off unoriginal work. Generative AI, however, was not considered in the development of currently used plagiarism trackers (e.g. Turnitin). As a result, the tools used to detect plagiarism need to re-evaluate their systems to encompass AI-generated content.

While the uniqueness of AI-generated content comes with the risk of incorrect information that an uninformed reader may struggle to detect, the shortcut is tempting enough that many students question whether they want their universities accepting AI with open arms. However, just as the internet matured and rules were established around plagiarism, fair use, and misinformation, AI in education is destined to undergo a similar evolution.

It is worth noting that, although it may be uncomfortable, AI may be the shove that education desperately needs to overcome the grade-over-learning rhetoric pushed in assessments: ‘Grades are seen as the currency of learning… We have created a system where we value grades more than learning.’ While a mass reformatting of assessment styles will be a daunting task across the board, there are potential advantages. Volunteering, extracurriculars, research, and practical applications of learning would become more valued than marks on essays and recall-style assessments. With this shift away from grades and numbers, using generative AI to chase higher marks would hold little to no value for either academic or employment success, and learning would again be the bread and butter of education.

Until this shift is made, the essay-and-recall-focused assessments that universities rely on will rest on academic integrity and the development of AI plagiarism detectors alone. Rules and regulations will be crucial in ensuring the fair application of AI tools in assessments, preventing an academic hierarchy based on students’ financial capabilities. Much like the plagiarism rules that emerged for online content, guidelines will emerge to maintain the integrity of academic achievements in the era of AI. For example, universities have already forewarned students of the academic consequences of being caught using generative AI to craft the majority of a submission. While there is still room for improvement in detecting AI-generated content, generative AI as an academic resource is a novel concept, and its regulatory software will only improve over time, much as plagiarism trackers did.

While it is easy to focus on the negative implications of a financial advantage, it is also important to notice how generative AI has the potential to level the playing field for students with disabilities, offering them a more inclusive and accessible educational experience. This example reinforces the importance of hearing ALL student voices, so that universities can find a middle ground that mitigates fears surrounding AI while benefiting students’ education and careers across the board.

The Sceptical Lens: AI’s Struggle with Bias and Misinformation

Students’ scepticism towards AI further revolves around its perceived limitations and inaccuracies. Generative AI struggles with nuance and context, particularly where subjective judgment and creativity come into play. Though it can create unique material, these efforts are still guided by algorithms limited to the content included in their training data. It is worth acknowledging, however, that technology systems have historically faced limitations, bias, and inaccuracies in their early stages.

Google, often hailed as the epitome of reliable information retrieval, is not without its flaws. Its algorithms, shaped by the vast data they process, can inadvertently perpetuate biases and surface flawed information. A notable example is the criticism of Google image searches for ‘CEO’, whose results skewed heavily towards white men; similar scrutiny followed searches for ‘unprofessional hair’, whose results skewed towards Black men and women. These instances underscore the fallibility of technology, revealing that even widely used platforms are susceptible to misinformation and bias.

Technology systems, including AI, are vulnerable to misinformation and exploitation in their nascent stages. Their strength, however, lies in their ability to evolve and improve through user interaction. Users play a pivotal role in reporting flaws, identifying biases, and prompting developers to address these issues.

As more users engage with AI tools, the systems gain insights, learn from diverse inputs, and adapt to address their shortcomings. This iterative process is fundamental to the maturation of technology, making it more reliable, nuanced, and adept at handling the complexities of human expression and creativity. Fears that stop students from interacting with these platforms may only slow the inevitable integration of AI into everyday academia and the workplace. These early days are the best time to explore AI’s possibilities, discover its boundaries, and co-create the future of the technological world while it is still being developed. The importance of this is best captured in a saying common in current AI discussions: ‘AI will not replace humans, but people who can use it will’.

The Importance of Dialogue: Shaping the Future of Academia

In navigating the integration of AI into education, the dialogue between students and academic staff is paramount. Students must articulate their desires and expectations regarding AI’s role in academia, and educators must translate these aspirations into a framework that upholds academic integrity and fairness.

It’s essential for universities to proactively engage in these conversations, fostering an environment where the incredible possibilities of AI can be explored without compromising the fundamental principles of education. This ongoing dialogue will be the linchpin in shaping a future where AI enhances learning experiences, promotes inclusivity, and prepares students for the careers of tomorrow.

While fears and anxieties about AI in education are valid, history shows us that with thoughtful regulations and open communication, we can navigate these challenges and harness the transformative power of AI for the betterment of education. Let the conversation continue and let us collectively shape a future where AI is a tool for empowerment, not a source of division.

