Commentary: AI detectors don't work, so what's the end game for higher education?

SINGAPORE: When ChatGPT was first released, it caused a panic in the education sector. Many schools and universities banned its use, fearing it would destroy students’ ability to learn and be tested properly.

However, less than two years on, the tone has changed dramatically. The International Baccalaureate (IB) allows artificial intelligence to be used in the completion of schoolwork, AI tools are acceptable in academic writing for publication, and educators from primary schools to universities are incorporating ChatGPT into school assignments.

While these developments raise ethical concerns, one thing is clear: Banning AI in education is like trying to hold back a tidal wave with a teacup. Instead, we need to learn how to use it.

However, recent reports indicate that while universities in Singapore encourage the critical use of AI tools in academic work, they may also use AI detection programmes such as Turnitin’s AI detector.

While there is nothing wrong with using these technologies as teaching aids, we need to be crystal clear with students and educators that they have limitations and cannot be the basis for punishing students.

Furthermore, familiarising ourselves with the benefits and limitations of current AI models prepares us for tomorrow’s advancements. Recently, OpenAI released one of the world’s most powerful models, GPT-4o, for public use. GPT-4o accepts audio, text and visual input, and produces sophisticated outputs.

These tools are only going to get better with time. Think of the journey from the first mobile phone to the sleek, powerful smartphones of today, and we get a sense of what’s to come.

PITFALLS OF “CATCHING” STUDENTS WITH AI DETECTORS

Despite this, many universities try to use AI detectors to “catch” students submitting AI-generated work and then penalise them for it.

AI detectors are designed to identify text generated by AI systems like ChatGPT. They work by analysing patterns and word usage typical of AI writing tools.

However, these technologies can be unreliable and potentially biased. Our recent research project demonstrates that simple techniques such as adding spelling errors can reduce the effectiveness of AI detectors by up to 22 per cent.
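As a rough illustration of the kind of perturbation involved (a minimal sketch in Python, not our actual study code; the detector itself is omitted, and the function name is ours), consider randomly swapping adjacent letters to mimic human typos before a text is checked:

import random

def inject_typos(text: str, error_rate: float = 0.05, seed: int = 42) -> str:
    """Return a copy of text with roughly error_rate of its adjacent
    letter pairs swapped, mimicking human typing errors."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < error_rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]  # adjacent-swap typo
    return "".join(chars)

sample = "Detectors analyse patterns and word usage typical of AI writing tools."
print(inject_typos(sample, error_rate=0.1))

Such surface noise barely affects readability for a human marker, but it shifts the statistical fingerprint that many detectors are trained to recognise.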

Almost all students who use AI to write essays will be editing and modifying the output - meaning detection won’t work well, if at all. To be blunt, if a student’s work shows up as entirely AI-generated, all it means is that they are not very good at using AI. Only in the simplest cases of copy-and-paste is an AI detector guaranteed to give a positive result.

AI detectors struggle to keep up with quickly changing AI models, and their reliance on standardised measures of what counts as “human” writing can unfairly disadvantage people who speak English as a second or third language. The risk of falsely accusing students and damaging their futures raises serious concerns about the use of AI detectors in academic settings.

Furthermore, this approach is counterproductive in a world where we should be reaping the benefits of AI. You can’t extol the advantages of using a calculator and then punish students for not doing math in their heads.

Educators shouldn’t rush to punish students based on what AI detectors say. Instead, they should think of better ways to assess students.

A DIFFERENT APPROACH TO AI USAGE IN EDUCATION

AI tools have much potential in education. They can assist students in brainstorming ideas, structuring their thoughts and editing their work to improve clarity and coherence. By using these tools, students can enhance their digital literacy and prepare for a future where AI will play a significant role in various professional fields.

But since detecting AI is a dead end, what should educators do when they can’t tell if a student’s work is “their own”?

One solution is for educators to move away from a binary “AI or no-AI” policy and adopt a scaffolded approach that makes clear to students how much AI may be used in completing a task. Educators can provide a range of assessments where AI use is allowed to varying degrees - for instance, AI-generated content might be used to help improve students’ essays, but banned during in-person examinations.

This gives educators a picture of students’ knowledge and abilities without technology, as well as of how adept they are at using it.

Working with colleagues in Vietnam and Australia, we developed a tool called the AI Assessment Scale (AIAS). This scale allows educators to tailor AI usage to the needs of different subjects and assessment types, ensuring that AI enhances learning outcomes without compromising academic integrity.

It empowers teachers to stop fretting over whether their students did or did not use AI, and to focus on teaching students to engage with AI tools ethically and responsibly. This involves providing guidance on proper citation of AI-generated content and fostering an understanding of the limitations and potential biases of AI tools.

The AI Assessment Scale (Table provided by the authors)