Primary and Elementary Education Presents Bias Challenges for Artificial Intelligence and EdTech

June 13, 2021

Carrie Purcell, Co-Founder & Chief of Partnerships and Strategy for Tech AdaptiKa, discusses artificial intelligence and EdTech and the biases associated with them in elementary education

Artificial intelligence is changing the field of education through EdTech. AI can be defined as a tool for developing and studying systems that simulate or imitate human skills, with the potential to reason and to refine that imitation until it matches (or even surpasses) those skills. Part of that improvement involves its potential to add more fairness to a system that has resisted change for decades (if not centuries). Still, it presents its own set of ethical obstacles. Some of the most evident at this still-developmental stage of EdTech include algorithmic bias, racial bias and learning bias. Indeed, the fact that AI can have multiple definitions makes it subject to as many biases.

Consider that there can be no single definition of artificial intelligence. Even rudimentary calculating machines such as the abacus, or the 17th-century machines designed by Blaise Pascal and Gottfried Wilhelm Leibniz, fall into the category of AI; and so, certainly, does Charles Babbage’s ‘analytical engine,’ which anticipated and influenced the characteristics of modern computers. And then there is Alan Turing, who during World War II pioneered the field that would become computer science. Still, the actual term ‘artificial intelligence’ was coined by John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon in their 1955 proposal for a conference on the subject at Dartmouth. It is a concept that covers many topics across as many disciplines, from computer science to mathematics, neurology, driving, design, even writing and, of course, education. And each of these disciplines defines AI differently: that is, using its own inherent specialized bias. Nevertheless, while AI-driven EdTech serves as a valuable support tool at the advanced high school or university level, where the individuals who use it are generally better equipped to make informed choices about their training paths, the picture is different in the earlier grades. Because bias weakens automation and predictive technology at every level when its actions relate to behavioral activities, some researchers suggest either avoiding AI there altogether or adopting strict controls to mitigate it.

It is in primary schools that students learn to develop their communication and language abilities. And it is during this crucial period that both human and AI biases have their greatest impact. The algorithms that allow an artificial intelligence system to learn and adapt to the individual student will inevitably transmit racial or gender prejudices that may be hidden. When an artificial intelligence (AI) system learns a language from text, it also assimilates the racial and gender prejudices of the human beings who produced that text. An EdTech solution may therefore be no more impartial than a human one in the same circumstances. It is at the elementary and primary levels of education that the socio-economic context of individual students can enhance or detract from their formative experience. Students of less-educated parents, or of immigrant parents without professional competence in English (or whatever the dominant language may be, depending on the country), who live in poor neighborhoods will arguably face learning and stimulus disadvantages even in EdTech-rich environments.
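As a rough illustration of how such associations can surface in a language-learning system, the sketch below uses small, invented word vectors (a real system would learn embeddings with hundreds of dimensions from large text corpora) and measures whether an occupation word sits closer to ‘he’ or to ‘she’. The numbers, the word list and the cosine helper are hypothetical, chosen only to make the mechanism visible.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Invented 4-dimensional embeddings, for illustration only.
# Real embeddings are learned from large text corpora, and any skew in that
# text (e.g. "nurse" co-occurring more often with "she") ends up encoded in
# the geometry of the vectors.
vectors = {
    "he":       np.array([0.9, 0.1, 0.3, 0.0]),
    "she":      np.array([0.1, 0.9, 0.3, 0.0]),
    "engineer": np.array([0.8, 0.2, 0.5, 0.1]),
    "nurse":    np.array([0.2, 0.8, 0.5, 0.1]),
}

for word in ("engineer", "nurse"):
    skew = cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])
    print(f"{word}: similarity to 'he' minus similarity to 'she' = {skew:+.3f}")
```

A positive score means the word leans toward ‘he’, a negative one toward ‘she’; an unbiased vocabulary would keep such scores close to zero.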

EdTech can also help reduce bias, because AI can be used to design tools that account for potentially prejudicial attitudes and behaviors in human beings. It has also raised awareness of the entrenched prejudices and cultural stereotypes that humans possess: as the Word-Embedding Association Test (WEAT) has shown, AI systems that absorb a language also absorb its implicit biases. For example, humans can discriminate among the candidates they choose to invite for a job interview simply by looking at their names. A US study showed that applicants with European names had a better chance of being invited than those with African-American names. Unless an algorithm were designed to avoid this specific issue, the same would happen if the choice were made by an AI system. Moreover, AI and EdTech can amplify problems related to student privacy, or biases that are not necessarily tied to gender or race but rather to the programmers themselves, some of whom could incorporate corrupted data into algorithms, favoring certain types of students over others or compromising security.
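For readers curious how the WEAT quantifies such associations, the sketch below computes its effect-size statistic: how much more strongly one set of target words (here, stand-ins for European-American names) associates with ‘pleasant’ attribute words than another target set (stand-ins for African-American names) does. The vectors are random placeholders generated for illustration; a real test would use trained embeddings and the published word lists.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: how much more target set X (vs. target set Y)
    associates with attribute set A relative to attribute set B."""
    def assoc(w):
        # Mean similarity of vector w to set A minus its mean similarity to set B.
        return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    x_scores = [assoc(x) for x in X]
    y_scores = [assoc(y) for y in Y]
    return (np.mean(x_scores) - np.mean(y_scores)) / np.std(x_scores + y_scores, ddof=1)

# Placeholder vectors standing in for name and attribute embeddings.
rng = np.random.default_rng(0)
european_names   = [rng.normal( 1.0, 0.2, 8) for _ in range(5)]  # target set X
african_am_names = [rng.normal(-1.0, 0.2, 8) for _ in range(5)]  # target set Y
pleasant_words   = [rng.normal( 1.0, 0.2, 8) for _ in range(5)]  # attribute set A
unpleasant_words = [rng.normal(-1.0, 0.2, 8) for _ in range(5)]  # attribute set B

# A value near 0 would mean no differential association; this toy data is
# deliberately constructed to produce a strong positive score.
print(weat_effect_size(european_names, african_am_names, pleasant_words, unpleasant_words))
```

On real embeddings, the sign and magnitude of this statistic indicate which stereotype the model has absorbed from its training text and how strongly.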

There are therefore numerous risks stemming from the programming of algorithms, but ultimately even more opportunities for artificial intelligence and EdTech in the field of education. Indeed, AI has an advantage in that its very design and implementation process serves to identify and address human biases that few education systems had ever considered, let alone identified. Artificial intelligence has opened opportunities that are exciting, yet largely unexplored. And like all areas of human endeavor, whether in geography, science or ideology, that become the object of intense inquiry, there is a tendency to rush the ‘adoption and adaptation’ of new realms.

It is the natural dynamic of revolutions to trample over everything that preceded them, without consideration for the past. Consider such events as the European discovery and colonization of the Americas, the French Revolution (or the Russian), the industrial revolution, quantum physics or the Internet itself. These giant ‘paradigm shifts’ in their respective fields destroyed the intellectual foundations from which they emerged without paying any attention to the economic, social and ethical realities they disrupted. Our society today praises ‘disruption’. But there is sufficient evidence from history, recent and ancient, that we know very little about how new ideas and technologies ultimately affect societies. The results are often beyond prediction. When asked in 1972 what he thought about the impact of the French Revolution, the late Chinese premier Zhou Enlai is said to have answered: “it’s too early to say”. It turns out that Zhou was actually responding to a question about the 1968 student revolt, not the 1789 Revolution that overthrew the monarchy. Nevertheless, the ‘mistaken’ answer has resonated, because when we think about the French Revolution it soon becomes apparent that it is still too early, and too complex, to fully describe the impact of its disruption. If historians and social scientists can draw different conclusions from an event that happened over 200 years ago, then it is more than reasonable to consider the implementation of artificial intelligence and EdTech from a variety of viewpoints to mitigate, or soften, its impact. And the impact will be different for every application.

Carrie Purcell is Co-Founder and Chief of Partnerships and Strategy for Tech AdaptiKa. She is a visionary and digital transformation leader with a strong background in research and innovation in the higher education industry, striving to create the future of the classroom.

You can learn more about EdTech here: https://global-edtech.com/edtech-definitions-products-and-trends/