Is artificial intelligence culturally intelligent?

The question of whether or not AI is culturally intelligent necessitates addressing a number of connected issues: who creates and controls AI, who creates the narratives around AI and how it is represented, whether AI can think for itself, and whether AI has values. Artificial intelligence, and the narratives about it, are becoming part of our culture(s). The way it is used, and the impact it has across cultural contexts and different societal groups, should be a concern to cross-cultural management scholars.


Before addressing the specific question, let's cover the basics. To do this we need to be clear on what artificial intelligence (AI) is, and what it is not. AI is a tool. It is not an end in itself. The fears expressed about AI as an existential threat appear to be related to a concern that AI could become an end in itself. In popular literature, if you have ever read the excellent I Arise by Kevin Wignall, in which an AI stands trial to determine whether it is sentient (which begs the question: is it possible to be intelligent without being sentient?), or Matt Gemmell's Jinx, where an AI breaks out of its computer and starts killing people for a higher purpose it has set itself, then you will get the message.

But, apparently, this is not just the stuff of fiction. It seems this threat isn’t to be treated lightly. A number of leaders in AI got together last year and signed the statement:

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

Pretty scary stuff if you consider that many of the signatories are the creators of AI: the likes of Sam Altman (CEO, OpenAI), Dario Amodei (CEO, Anthropic), Bill Gates, Demis Hassabis (CEO, Google DeepMind), as well as many computer science academics and politicians. But why should those with such a vested interest in AI, in all its commercial and profit-making splendour, be warning against it?

These key figures are not just the creators of AI, they are also the creators of the dominant narratives about artificial intelligence. Narratives are part of culture. They are created in the cultural and social context. They are driven by power relations within that context. And, they need to be understood by cross-cultural management scholars as part of culture(s), and challenged within the cross-cultural context.

That the loudest voices on AI's threat to humanity are coming from Silicon Valley is significant, according to Mhairi Aitken in a recent article in New Scientist (The real threat from AI, 1 July 2023). Seemingly at odds with the interests of big tech, this narrative appears to divert attention from their current actions towards future speculation about the abilities of AI. It deflects accountability away from those who fund and create AI today towards what AI might become in the future. It creates an illusion of inevitability, rather than outcomes being the result of decisions by people and organisations that are entirely controllable: AI models only do what they are programmed to do. People such as Altman are being listened to by, for example, European regulators, while those negatively affected by AI are rarely heard. Those who create the technology have also shaped the dominant narrative. As AI becomes more and more part of our culture(s), including within the domain of business and management, these narratives are important, particularly as we start to look across cultures.

Abeba Birhane speaks of 'algorithmic colonisation', writing specifically about the effects of AI on Africa. This involves two aspects. Firstly, 'Western imported technology not only encodes Western values, objectives, and interests, but also impedes African-developed technologies that would serve African needs better.' And secondly, 'various practices such as data collection, formalization, and algorithmicization of Africa by Western institutions, organizations, and corporations constitutes a form of digital colonization.' She suggests that, despite what is argued by tech companies aiming to liberate the 'bottom billion' (see bottom-of-the-pyramid), for technologies to be liberating they have to emerge from the needs of communities and be controlled by those communities.

She questions whether AI software developed and encoded with the values, norms and interests of (a sector of) Western societies can be relevant and appropriate to African users. The narrative created, mainly by 'a homogeneous group of predominantly white, middle-class, males from the Global North' who create and control AI, is one of overhype: the capabilities of the technology are exaggerated, as if it exists and learns independently of those who created it. It is difficult to challenge this narrative, which further sees AI as a means of lifting Africans out of poverty and disease (see also mobile money as another example, and one that Birhane also points to). This appears to assume that the nature of social, economic, educational and cultural problems is universal, and that these can therefore be addressed by importing AI technology that is assumed to be objective and value-free. It also appears to, again, project Africa as a continent that cannot help itself. And it also seems to place AI beyond the understanding of anyone other than the specialists who create the algorithms that drive AI systems such as large language models.

Yet it also appears to inhibit the development of technology from Africa itself. Birhane points to the example of one of Africa's more technically advanced countries, Nigeria, which ironically imports 90% of all software used in the country, producing only extensions and add-ons for such mainstream, imported software. As the tools built in Western countries by predominantly middle-class white males 'embed, reflect and perpetuate socially and culturally held stereotypes and unquestioned assumptions', she suggests, they often lead to injustice and discriminatory treatment of less powerful groups. The smart surveillance software Vumacam in South Africa assumes that most criminals are black males; and with the COMPAS recidivism algorithm, used by the US criminal justice system to predict the likelihood of criminals reoffending, black defendants are twice as likely as their white counterparts to be misclassified as being at higher risk of violent reoffending. There are many other examples. Consulting those who are likely to be negatively impacted might be a good idea. Encouraging AI development in countries such as Nigeria might be a better idea, and may, according to Birhane, counter the negative effects of 'Western AI systems founded upon individualistic and capitalist drives'.

Whiteness and intelligence

Intelligence is white. This is certainly true of the history of intelligence (IQ testing marginalising ethnic minorities in Western countries), how it has been conceived and how it has been studied (from its heyday in the 1970s, exemplified in the work of Eysenck). It also appears to be true of cultural intelligence, which has been conceptualised not within the centuries-old contexts of the multicultural societal and business environments of African and other countries in the South, but in the increasingly cross-cultural environments of business and organisational life in Western countries over the last couple of decades. And, it appears that artificial intelligence is white.

Sophia the robot is UNDP innovation champion

Whiteness is invisible. Only those who are not white are attributed with race. Whiteness is normative, an unexamined racial default. It is an unmarked identity, much like heterosexuality. Stephen Cave and Kanta Dihal therefore argue that the whiteness of AI appears to reflect the predominantly white milieu from which it arises, and hence that to imagine intelligent, professional and powerful machines is to ascribe to them the characteristics predominantly ascribed to white people. They add that 'Images of AI are not generic representations of human-like machines, but avatars of a particular rank within the hierarchy of the human' (p.699), and can promote 'representation harm' in the following three ways:

  • The racialisation of AI representations can amplify the prejudices they reflect: this is how the mainstream sees intelligence.
  • These machines can represent a hierarchy of humans that increases injustice, placing the machines above already marginalised groups on the basis of race.
  • This can frame the debate about the risks and benefits of AI so that the narratives focus on the ways these might affect predominantly white middle-class men.

Representations of AI appear to be perpetuating and exacerbating the injustices within current global society. These are the narratives created about AI. But what about the content produced by the tool itself, the large language models fed by algorithms created within a predominantly white milieu? Can AI make value judgements, given that values are a large part of what culture is?

Can AI have values?

Values are a core component of the conceptualisation of culture in the work of Hofstede, Trompenaars, Schwartz, Inglehart and the GLOBE studies. Their understanding, and their effects on action, are an important aspect of what constitutes culture, and of the similarities and differences among different cultural contexts. I would assert that, for anybody or anything to be culturally intelligent, they would have to understand such values and, more than this, be able to implement decisions on how to act in a culturally intelligent way by drawing on their own values. I do not believe machines can make these judgements. However, the humans creating AI can.

Image of ‘AI with values’ created in Microsoft Designer powered by DALL-E 3

What you put in you get out. AI is attributed intelligence (by its software) because, in the case of large language models for example, it is trained on huge quantities of information (legally obtained or not) harvested from the internet, which it can process very quickly. It 'learns' statistical patterns from that material. When prompted, it responds by drawing on that information and on that learning.
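
As a minimal sketch of the 'what you put in you get out' point, the toy 'language model' below (written in Python for illustration; the corpus and function names are my own invention, not anything used by real AI systems) can only ever reproduce word patterns present in the text it was trained on. Real large language models are vastly bigger and more sophisticated, but the principle is the same: the output is a function of the training data and of the choices of those who assembled it.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus: str) -> dict:
    """Map each word to the list of words that follow it in the training text."""
    words = corpus.split()
    model = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model: dict, start: str, length: int = 10) -> str:
    """Generate text by repeatedly sampling one of the recorded next words."""
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:  # nothing was ever seen after this word
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# A tiny, invented 'training set': whatever biases or gaps it contains
# are exactly what the model will reproduce -- and nothing else.
corpus = "intelligence is learned from data and data is chosen by people"
model = train_bigram_model(corpus)
print(generate(model, start="data"))
```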

Let us suppose that it comes across racist posts or publications on the internet. Let us also suppose that it comes across information that opposes such views. It can report a 'balanced' summary of what it finds (it can write you an essay of sorts), but can it decide between the two? Which is right and which is wrong? Can it choose sides between, say, the Palestinians and the Israeli government in the current Gaza conflict, or the Russians and the Ukrainians in the ongoing war? (Again, it can write you a 'balanced' essay.)

Perhaps these are extreme examples, but the point is that 'decisions' of this sort are based on value judgements (which themselves are based on very complex factors such as real-life experiences, and of course cultural background). Values, as we know from work such as Hofstede's, are related to societal culture. Certainly, AI is quite 'capable' of generating racist content. It can also deliver racist opinions, but to do so it needs either to be programmed and/or specifically asked by its users to do so, or to be trained on data that has not been filtered by humans; it is not 'capable' of filtering such content out itself unless successfully instructed to by the designer or user.
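
To make the point about filtering concrete, here is a minimal sketch, assuming a crude keyword blocklist (the terms and data below are invented placeholders, and real moderation pipelines are far more elaborate): the judgement about what counts as unacceptable content is written into the system by people, not discovered by the model.

```python
# A minimal sketch of human-authored filtering of training data.
# The blocked terms below are invented placeholders; real moderation
# pipelines are far more elaborate, but the value judgement about what
# to exclude is still made by people, not by the model itself.
BLOCKED_TERMS = {"offensive_term_1", "offensive_term_2"}  # chosen by humans

def is_acceptable(document: str) -> bool:
    """Return True if the document contains none of the blocked terms."""
    text = document.lower()
    return not any(term in text for term in BLOCKED_TERMS)

raw_corpus = [
    "a harmless sentence about culture and management",
    "a sentence containing offensive_term_1 that the designers chose to drop",
]

# Only documents that pass the human-defined filter reach the training set.
training_corpus = [doc for doc in raw_corpus if is_acceptable(doc)]
print(training_corpus)  # the second document is excluded
```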

So, in order to answer the question of the title of this post, other questions have to be answered.

  • Is AI in fact intelligent? This then begs the question:
  • What is intelligence? This in itself is controversial and has a very chequered history.
  • Does cultural intelligence exist, and if so what does it comprise? And,
  • Should we ask a question that assigns human attributes to machine learning?

But the last words should go to the AIs themselves. When asked whether AI is culturally intelligent, they gave responses from which I have quoted the most salient excerpts.

ChatGPT 3.5 replied: Artificial intelligence (AI) lacks inherent cultural intelligence as it operates based on algorithms and data, devoid of personal experiences or cultural understanding.

Google Bard (now Gemini) wrote: Artificial intelligence (AI) is making significant strides in various fields, but its ability to navigate human culture remains a challenge.

Microsoft Bing (now Copilot, and based on GPT-4) informed me that: Artificial intelligence (AI) is not inherently culturally intelligent. It is a product of human design and is only as culturally intelligent as the data it is trained on.

Anthropic’s Claude was more circumspect, saying: I believe that the question of whether AI systems like myself can be considered culturally intelligent is a complex one with reasonable arguments on multiple sides. Ultimately, I think it depends on how one defines cultural intelligence and what specific capabilities are deemed essential.

Claude's full response was the most helpful, and I have published the whole dialogue here. Like all of the AI chat systems I interacted with, Claude also produced mostly references that did not exist. When asked for the reasons, it listed the following: limitations of training data; hallucination tendencies (which seem to be endemic to large language models); and unconscious bias.
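
As an aside, one might spot such fabricated references with something like the rough check sketched below, which assumes the public Crossref API and the Python requests library; the example citation and the word-overlap threshold are purely illustrative, not a definitive method.

```python
import requests

def reference_seems_real(citation: str, min_shared_words: int = 4) -> bool:
    """Ask Crossref whether a bibliographic string resembles an indexed work.

    A rough heuristic only: failure to match does not prove fabrication, and
    a match does not prove the chatbot cited the work accurately.
    """
    response = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=10,
    )
    response.raise_for_status()
    items = response.json()["message"]["items"]
    if not items:
        return False
    # Crossref returns its best fuzzy match; compare titles crudely.
    best_title = (items[0].get("title") or [""])[0].lower()
    shared = set(best_title.split()) & set(citation.lower().split())
    return len(shared) >= min_shared_words  # arbitrary threshold, for illustration

# An invented citation of the kind a chatbot might produce.
suspect = ("Smith, J. (2021) Cultural intelligence and artificial minds. "
           "Journal of Imaginary Studies.")
print(reference_seems_real(suspect))
```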

Cross-cultural management scholars have worked hard at calling out ethnocentric tendencies in business and management tools, practices and strategies, and their attendant assumptions of universality. AI is another such management tool, a technology that is assumed to travel easily from one (dominant) cultural context to other (less dominant) contexts. As with all knowledge and technology arising predominantly from a specific cultural context and from specific sectors of society, this does not necessarily work. Colleagues have also worked hard on conceptualising the types of abilities needed to work effectively and appropriately across cultural contexts, and this involves trying to devise a concept of cultural intelligence.

The concept of intelligence, and its history, is fraught with difficulties, not least in its measuring and categorising of people according to the vested dominant interests within certain societies. AI appears to have acquired this tendency to reflect those dominant interests, with those who design and market these systems deflecting attention from the issues that should be addressed.

There is a huge market for software in general, and AI in particular, in countries of the global South. But is it appropriate? Does it reflect the interests of people and communities in those countries? AI does not appear to be culturally sensitive, nor capable of understanding and making value judgements, and it is not likely to be culturally intelligent. Nor is it likely to be capable of 'understanding' the vested interests in society that it represents, or of acting independently upon such knowledge, unless values related to this are inculcated by those who control it.


Disclaimer: No AI large language model was used in the production of this work, other than the quotes provided above. Two of the images above were created by AI DALL-E 3 responding to my prompts in Microsoft Designer, accessed via Microsoft Copilot within the Edge browser. I didn’t prompt for any specific racial identity to be represented in the images.

Featured image: Image of ‘AI being culturally intelligent’ created in Microsoft Designer powered by DALL-E 3

© Terence Jackson 2024
