
Dr. Rittika Shamsuddin

Department of Computer Science



Imagine you have an AI system that helps doctors diagnose diseases. It tells a patient they have a particular illness, but when the patient asks why, the AI says, "Trust me, I'm right." That's not very helpful. We want to know why the AI made that decision so that we can trust it and feel confident in its recommendations.


Imagine you're applying for a loan, and your application is evaluated by an AI system that determines whether you'll be approved. The AI system simply gives you a "yes" or "no" decision without any explanation. If the loan is rejected, you might wonder, "Why was my loan application rejected? Was it because of my credit score, my income, or something else entirely?"


Without any explanation, it's challenging to know what factors influenced the decision and whether there was any bias or error in the AI system's assessment.


Imagine a company is looking to streamline its hiring process and implements an AI system to screen job applications and select candidates for interviews. The AI system analyzes resumes and assesses applicants based on factors such as education, work experience, and skills. While this approach may seem efficient, there's a growing concern that the AI system could unintentionally introduce bias, potentially leading to unfair hiring practices, e.g., passing over a highly qualified female candidate because, historically, the company has hired mostly men.


That's where Explainable AI (XAI) comes in. It focuses on developing AI systems that not only make accurate predictions but also provide explanations for their decisions. Think of the AI as a "black box" that XAI lets us open and peek inside to see how it works.

XAI tries to answer questions like:

  • Why did the AI system make a particular decision?
  • What factors or features influenced its decision the most?
  • Are there any biases or limitations in the AI system's reasoning?


By providing these explanations, XAI helps us understand how and why AI systems arrive at their conclusions. This transparency is essential because it allows us to trust and verify the AI system's decisions, identify potential errors or biases, and even learn from the AI system's insights, thus helping to build accountability and enabling us to use AI responsibly and reliably.
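To make the idea of "opening the black box" concrete, here is a minimal sketch of one standard XAI technique, permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The loan-style feature names and the synthetic data below are purely hypothetical illustrations, not models or data from my research.

# A minimal, self-contained sketch (assumes Python with scikit-learn).
# The feature names and data are synthetic stand-ins for a loan model.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical loan-application features: each row is an applicant.
feature_names = ["credit_score", "income", "debt_ratio", "years_employed"]
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in test accuracy;
# a large drop means the model leaned heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean, std in zip(feature_names, result.importances_mean,
                           result.importances_std):
    print(f"{name}: importance {mean:.3f} +/- {std:.3f}")

If such a report showed, say, that credit_score dominates, it would answer the second question above: which features influenced the decision the most.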


Consequently, my research centers on the development of XAI models specifically tailored for healthcare and social welfare applications.


I am also actively involved in promoting diversity and personal growth within the fields of computer science and data science. My initiatives are aimed at encouraging and supporting women in computer science while fostering leadership and communication skills among our students.

