Enhancing Academic Integrity
with an AI Writing Detector

To develop and implement an AI writing detector that enables instructors to distinguish between student-generated content and content produced by AI

In the new and exciting era of text transformers and generators like ChatGPT, Gemini, Bard, and Perplexity, the line between human-written and AI-generated content blurs faster than ever. This presents a significant challenge for educators striving to maintain academic integrity, especially with students readily accessing sophisticated tools. That’s where Tii steps in.

Tii aims to uphold academic integrity by differentiating student-written content from AI-generated text. It addresses the growing challenge of sophisticated AI in student submissions, allowing educators to swiftly verify the originality of written assignments. Utilizing cutting-edge algorithms, Tii analyzes writing patterns, consistency, and linguistic markers to pinpoint AI authorship. Through an intuitive interface, Tii simplifies AI detection, fostering a fair learning environment that values original thinking and genuine effort.
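The case study does not include Tii's production detection code, but the kind of writing-pattern and linguistic-marker analysis described above can be illustrated with a minimal sketch. The markers below (sentence-length "burstiness" and vocabulary diversity) and the scoring weights are assumptions chosen for illustration only, not Tii's actual algorithm:

```python
import re
import statistics

def linguistic_markers(text: str) -> dict:
    """Compute simple stylometric markers sometimes used as weak signals of AI authorship."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    lengths = [len(re.findall(r"[a-z']+", s.lower())) for s in sentences]
    return {
        # Low variation in sentence length ("burstiness") is one commonly cited marker.
        "burstiness": statistics.pstdev(lengths) if len(lengths) > 1 else 0.0,
        # Type-token ratio approximates vocabulary diversity.
        "type_token_ratio": len(set(words)) / len(words) if words else 0.0,
    }

def ai_likelihood(markers: dict) -> float:
    """Toy score in [0, 1]: very uniform sentences plus low lexical diversity push the score up."""
    uniformity = 1.0 / (1.0 + markers["burstiness"])
    low_diversity = 1.0 - min(markers["type_token_ratio"], 1.0)
    return round(0.5 * uniformity + 0.5 * low_diversity, 2)

sample = "The essay argues a point. The essay repeats a point. The essay restates a point."
print(ai_likelihood(linguistic_markers(sample)))  # higher score for flat, repetitive prose
```

A production detector would rely on trained models and far richer features; this sketch only makes the idea of "writing patterns, consistency, and linguistic markers" concrete.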

The Problem

Powerful text generators like ChatGPT and Gemini blur the line between human and AI-written content. This jeopardizes academic integrity and hinders genuine learning.

The Solution

Tii's AI writing detector was developed to address this challenge. It uses advanced algorithms to identify linguistic markers typical of AI, helping educators quickly verify the authenticity of student work.

My Role

As the lead interaction designer for Tii, I crafted the user experience and interface, focusing on simplicity and efficiency while working with a team of developers and data scientists.

I. Discover

Battling the Bots

The proliferation of sophisticated AI writing tools like ChatGPT and Bard throws a wrench into academic integrity. Students can effortlessly generate near-human-quality text, jeopardizing authenticity and hindering true learning. Traditional plagiarism detection tools often fall short against this new wave of AI-powered content, leaving instructors wrestling with identifying and addressing the issue.

Attitude

At this point, it’s best if you join me in removing our solution hat. This is a stage of listening and observing, allowing ourselves to take in the state of affairs and accept it without wanting to change anything. Don’t worry, later we’ll put the solution hat back on, but for now we’re purely in receiving mode.

Methodologies

For the discovery process, I used a range of research methods: observation, brainstorming, secondary research, divergent and convergent thinking, affinity mapping, user surveys, user interviews, empathy mapping, personas, and journey mapping.

Process

The overarching discovery process began with understanding and then defining the problem area. Next came research, and finally a synthesis of the research findings.

Throughout this stage, I used the perpetual cycle of divergent and convergent thinking referenced above as the Double-Diamond design process. I adopted it because of the complexity of the problem we were trying to solve, while maintaining an empathetic mindset.

Defining the problem

To kick-start this project, I had a series of meetings with the product owners where the app requirements were defined. These meetings were also an avenue for me to ask specific questions centered on the following:

Who are we building for?

Why are we building this app?

What is the solution we are offering, and what makes us different from others?

With these answers provided in accurate, well-detailed form by the product owners, I had a very clear idea of the product requirements and of what was expected of me. From there, I proceeded to the next phase of the project: conducting independent research into the people we were designing for.

Gathering Insights from Educators

Customer Advisory Board

From the outset, we understood the crucial role of user feedback in crafting a solution that truly meets educator needs. We established a diverse Customer Advisory Board (CAB) of educators representing various disciplines and experience levels. Throughout the development process, we conducted regular CAB meetings, showcasing mockups, prototypes, and early iterations of the tool. Their invaluable insights shaped key features, refined the user interface, and ensured the solution addressed their most pressing concerns.

Researching

In the research phase, I immersed myself in information about AI text generators and followed conversations among instructors, institution administrators, and the sales and support teams: first through secondary research (articles, studies, and reports), and second through primary research (user surveys and testing).

Secondary Research

The first step I took was researching what others have found regarding the matter. Without dragging you through too much detail, this is what I found. (Information has been synthesized.)

Concluding from this data, it is clear that the use of text generators has significantly disrupted the educational world, leaving instructors and institutions scrambling to uphold academic integrity in the midst of bots.

Finally, and perhaps most striking, roughly 75% of Learning Management Systems are not yet working toward a solution; they remain largely dependent on third-party plagiarism checkers.

 

Competitor insights

Following Twitter conversations

Questions worth asking

Synthesis

After conducting in-depth user interviews, I synthesized my research findings by reviewing them for common patterns and themes. I did this with an affinity map, an empathy map, a competitor analysis, a persona, and finally a user journey map, then used these resources to compose leading “how might we” statements to kickstart the solution process.

Workshop synthesis

User Stories

User Persona

As a compilation of all the essential patterns and commonalities uncovered during the user interviews and refined through affinity and empathy mapping, let me introduce Charles, David & Jenny, and Sarah.

Charles is the lead user this project solves for: a sounding board for the next step in the process, designing a solution, and the project’s north star overall.

With this clear idea of who the design is for, I then had enough information to flesh out possible solutions.

II. Design

Bringing the research to life

Research is great but kind of pointless if nothing is done with it. In this next stage of the process, I started designing a solution based on Charles and the “how might we” statements.

In this stage, there are two types of design: 1) Usability Design and 2) Visual Design. One determines how things will function; the other determines how they will look.

Usability Design

I started the design process with usability design. After all, you can’t make something look good if it doesn’t exist.

I designed for usability through ideation, information architecture, sketching and guerrilla testing, and wireframing.

Ideating

In this stage of the process, I brainstormed product ideas by first conducting a brainstorming workshop with stakeholders across cross-functional teams: designers, engineers, product writers, product managers, and more.


User Flow: Onboarding Journey

Technical Flow Diagram

In this process, I made a site map and user flows for critical routes to aid users’ navigation and to guide the usability design process.

User Flow: Onboarding Journey

User Story - Visual Storytelling

Sketching was the first step in bringing the solution to life, followed by guerrilla usability testing to make sure the flows were functional.

Wireframes

With wireframing, I set out the basic structure of the product. I think of it like what framing is to building a house, or a skeleton to the body. It supports the end product.

Visual Design

With the product taking shape, it was time to turn to its brand and style. To deliver a consistent visual message that resonates with and informs the user, I applied the Tii GreenHouse Design System precisely throughout, recommending and contributing new components to the library.

Bringing Usability and Visual Design Together: High Fidelity Designs

With the structure and style set through the usability and visual design steps, I could start working on the high-fidelity designs. Using the internal design system together with the wireframes from earlier stages, I worked through the remaining issues so that the product was usable, consistent with the style guide, and accessible.

Accessibility Audit

After designing a few high-fidelity pages, I ran them through an accessibility audit to check color contrast and screen-reader readability; the pages passed the WCAG 2.0 guidelines.
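The audit itself was run with design tooling, but for readers curious about the contrast criterion it checked against, here is a small self-contained implementation of the WCAG 2.0 contrast-ratio formula (the green hex value is a hypothetical stand-in, not Tii's actual brand color):

```python
def _linearize(channel: float) -> float:
    """Linearize an sRGB channel in [0, 1] as defined by WCAG 2.0."""
    return channel / 12.92 if channel <= 0.03928 else ((channel + 0.055) / 1.055) ** 2.4

def relative_luminance(hex_color: str) -> float:
    r, g, b = (int(hex_color.lstrip("#")[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * _linearize(r) + 0.7152 * _linearize(g) + 0.0722 * _linearize(b)

def contrast_ratio(foreground: str, background: str) -> float:
    lighter, darker = sorted((relative_luminance(foreground), relative_luminance(background)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# WCAG 2.0 AA requires at least 4.5:1 for normal text (3:1 for large text).
print(round(contrast_ratio("#000000", "#FFFFFF"), 2))  # 21.0, the maximum possible ratio
print(round(contrast_ratio("#1A7F37", "#FFFFFF"), 2))  # hypothetical green on white
```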

III. Validate

Validating

In the validation process, I built prototypes from the high-fidelity designs and ran two rounds of user testing with two groups of five participants, iterating after each round to incorporate its findings.

Each round of user testing was conducted over Zoom.

Prototype

The initial prototype was fairly straightforward. I used Figma to build it, and it walked through the essential user route: the AI writing detection workflow.

Usability Testing 1

The initial round of usability testing brought up 3 main usability issues.

Issue #1

Removal of semantic colors for indicating the level of AI use

Issue #2

Lack of additional context on where the AI content was generated (the primary source)

Issue #3

Lack of threshold settings that let institutions and administrators define allowable AI usage limits

Iteration

The first round of prototype iteration focused on solving the main issues raised in usability testing round 1.

Usability Testing 2

Provided a link or tooltip that explains the AI’s training data, model, and accuracy. Transparency builds trust.

Displayed a concise source label alongside AI-generated content, listing the models most likely used and reinforcing how our algorithm powers the detector.

Allowed institutions to set usage limits based on their needs and to toggle the detector on or off, as sketched below.
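The case study doesn't specify the data model behind these settings, so here is a minimal sketch of what a per-institution configuration could look like; every field name and default below is an assumption for illustration, not Tii's actual schema:

```python
from dataclasses import dataclass

@dataclass
class DetectorSettings:
    """Hypothetical per-institution settings for the AI writing detector."""
    institution_id: str
    detector_enabled: bool = True        # administrators can toggle the detector on or off
    allowed_ai_percentage: float = 20.0  # share of AI-flagged text tolerated before a submission is surfaced

    def flags_submission(self, ai_percentage: float) -> bool:
        """Return True when a submission exceeds the institution's allowable AI usage limit."""
        return self.detector_enabled and ai_percentage > self.allowed_ai_percentage

settings = DetectorSettings(institution_id="state-u", allowed_ai_percentage=15.0)
print(settings.flags_submission(ai_percentage=42.0))  # True: surface this submission to the instructor
```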

 



IV. Reflect

Learnings

  1. AI writing tools pose a significant threat to academic integrity.
  2. Our AI writing detector offers a highly accurate and efficient solution.
  3. CAB feedback was instrumental in shaping the solution’s development.
  4. Testing with educators yielded remarkable results, impacting confidence, time savings, and learning outcomes.
  5. We are committed to developing solutions that support educators and empower students in the ever-evolving digital world.

 

Next Steps

I would further explore many of the concepts I played around with in my sketches and low-fidelity wireframes. The feature is built around the idea of collaboration across projects and organizations, so I want to explore displaying these indicators on student submissions, which would help students submit authentic material and improve their learning outcomes. I also intend to look into considerations for non-native English speakers who may have used paraphrasing tools, and into how we can properly classify those tools.

 

""This tool is a game-changer! It allows me to focus on the substance of writing and student understanding, rather than spending hours investigating potential plagiarism."

Top CAB Quotes

"Knowing my student's work is truly their own is incredibly rewarding. It motivates them to put in the effort and learn more."

Top CAB Quotes

 Made with ❤️  from a global citizen 🌎