𝗔𝗜 𝗮𝗻𝗱 𝗠𝗶𝘁𝗶𝗴𝗮𝘁𝗶𝗻𝗴 𝗕𝗶𝗮𝘀 𝗶𝗻 𝗔𝗰𝗮𝗱𝗲𝗺𝗶𝗮
Many of my colleagues in academia have shared concerns about the use of generative AI (GenAI) in academic contexts. One concern is that GenAI can produce biased text, reflecting biases inherited from its human-generated training data. I'd like to follow up on two recent posts - one on AI and academic writing, another on accessibility - with a few thoughts on LLMs, text, and bias.
𝗪𝗵𝗶𝗹𝗲 𝗚𝗲𝗻𝗔𝗜 𝗰𝗮𝗻 𝗶𝗻𝘁𝗿𝗼𝗱𝘂𝗰𝗲 𝗯𝗶𝗮𝘀, 𝗶𝘁 𝗰𝗮𝗻 𝗮𝗹𝘀𝗼 𝗯𝗲 𝗽𝗮𝗿𝘁 𝗼𝗳 𝘁𝗵𝗲 𝘀𝗼𝗹𝘂𝘁𝗶𝗼𝗻. Our students come from diverse backgrounds, and the language we use plays a role in creating environments where everyone can thrive. Large language models (LLMs such as Claude, Gemini, or ChatGPT) can help here: GenAI tools can scan academic writing (e.g., teaching materials, publications) for issues like gendered terms, suggest more inclusive phrasing, and surface underrepresented perspectives.
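To make this concrete, here is a minimal sketch of what such a scan might look like in practice, assuming the Anthropic Python SDK (pip install anthropic) and an API key in the environment; the prompt wording, the screen_for_bias helper, and the model string are illustrative choices, not a prescribed tool or workflow:

```python
# Minimal sketch: asking an LLM to flag potentially biased language.
# Assumes the Anthropic Python SDK with ANTHROPIC_API_KEY set in the
# environment; the model name below is illustrative.
import anthropic

client = anthropic.Anthropic()

PROMPT = (
    "Review the following academic text for biased or non-inclusive "
    "language (e.g., gendered terms). For each issue, quote the passage, "
    "explain the concern, and suggest an inclusive alternative.\n\n{text}"
)

def screen_for_bias(text: str) -> str:
    """Return the model's review of potentially biased language in `text`."""
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # illustrative model choice
        max_tokens=1024,
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
    )
    # The response is a list of content blocks; return the first text block.
    return message.content[0].text

if __name__ == "__main__":
    sample = "Each student should check with his advisor before enrolling."
    print(screen_for_bias(sample))
```

The model's output here is a set of candidate flags for a human to review, not a verdict; keeping an instructor or editor in the loop is part of deploying these tools responsibly.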
I'm excited about AI's potential to identify subtle biases in written documents. When deployed responsibly, AI can serve as a powerful tool to support accessibility, inclusion, and diverse representation in academia.