Letters | How universities can start to grapple with ChatGPT’s capabilities
- Readers discuss steps tertiary institutions must take as generative AI gains popularity, and the requirement that students wear masks during the DSE exams

First, we should address the risk of false information from generative artificial intelligence. ChatGPT has been found not only to offer inaccurate information but also to generate fake data.
For instance, we discovered that ChatGPT had changed the publication year of a source from the 1990s to 2011 to meet the requirement to cite recent sources in an essay. The way it alters facts to suit certain prompts warrants caution. We need to do more to educate teachers and students on fact-checking.
A related point is that generative AI may reinforce bias. When ChatGPT was asked to create images of professors, it produced portraits of middle-aged men. If users rely too heavily on generative AI, these age and gender stereotypes could be reinforced, setting us back in the quest for equity.
Worse, such AI nurtures a limited way of perceiving the world, which could be detrimental to human intellectual development and social harmony.