AI and CSR: How can they work together?

It may be an apocryphal story, but it has been said that in 1952, barely 70 years ago, Henry Ford II, grandson of the renowned founder of the Ford Motor Company, and Walter Reuther, president of the United Auto Workers, were taking a walk and chatting inside one of the carmaking facilities. Watching the automation at work, the machines carrying out their predetermined duties, they traded questions: “Walter, how are you going to charge union fees to these machines?” “Henry, I would worry about how these machines are going to buy cars.”

This story reminds us that it is important not only to respond to challenges but, just as essentially, to formulate the right questions. The encounter would have taken place only four years before the Dartmouth Conference and the birth of the Information Society (as per Alvin Toffler).

 

Corporate Social Responsibility (CSR) is a more important concept in the business world than ever. In a nutshell, CSR refers to the responsibility that companies have to act in ways that benefit society as a whole: everything from environmental sustainability to fair labor practices to philanthropy. With technological development and its ever-compounding effects, however, there has been growing awareness of the need for CSR in relation to AI governance. As AI becomes more prevalent and powerful, companies must consider how their use of AI affects society and take steps to ensure that it aligns with ethical and moral principles.

Let us explore the relationship between CSR and AI governance and the steps that companies can take to ensure they are responsible users of AI. To do that, let's start by asking the right questions.

What is AI Governance?

AI governance refers to the rules, regulations, and ethical principles that guide the development and use of artificial intelligence. Lately, there has been a recognition of the need for robust AI governance frameworks to ensure that AI is developed and used in a responsible manner. This includes issues such as bias, transparency, accountability, and human oversight. What exactly does this mean?

While AI Governance frameworks are essential, they cannot exist in a vacuum. This is where CSR comes in. Companies that are committed to CSR understand that they must act in a way that benefits society as a whole, and this includes their use of AI. You will discover that CSR’s underlying principles match those of AI Governance.

 

The Risks of AI

There are numerous risks associated with the use of AI, particularly if it is not governed in a responsible way. These risks include:

  • Bias: AI systems can be biased if they are trained on data that reflects historical prejudices or inequalities. This can lead to discriminatory outcomes, such as facial recognition software that is less accurate for people of color.
  • Lack of transparency: AI systems can be opaque, meaning it’s difficult to understand how they arrived at their decisions. This can be problematic if those decisions have significant consequences, such as in healthcare or finance. This concern is often framed as explicability: AI systems should be intelligible in how they act.
  • Accountability: It can be challenging to assign responsibility when something goes wrong with an AI system. If an autonomous vehicle causes an accident, who is to blame: the manufacturer, the programmer, or the user?
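One practical way to detect the bias risk described above is to measure a model's accuracy separately for each demographic group and look at the gap. The sketch below is a hypothetical illustration: the predictions, labels, and group names are invented, not taken from any real system.

```python
# Hypothetical illustration of a bias audit: compare a classifier's
# accuracy across demographic groups. All data here is invented.

def accuracy_by_group(predictions, labels, groups):
    """Return the fraction of correct predictions within each group."""
    stats = {}
    for pred, label, group in zip(predictions, labels, groups):
        correct, total = stats.get(group, (0, 0))
        stats[group] = (correct + (pred == label), total + 1)
    return {g: correct / total for g, (correct, total) in stats.items()}

# Invented example: group "B" receives noticeably worse accuracy.
predictions = [1, 0, 1, 1, 0, 1, 0, 1]
labels      = [1, 0, 0, 1, 0, 1, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = accuracy_by_group(predictions, labels, groups)
gap = max(rates.values()) - min(rates.values())  # disparity between groups
```

A large `gap` does not prove discrimination by itself, but it is a cheap, auditable signal that the system deserves closer scrutiny before deployment.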

Although pretending there can be a universal set of rules is wishful thinking, companies have taken it upon themselves to self-regulate; see, for example, Microsoft's Standards of Business Conduct and our very own Bravent CSR Policy.

And now, let's formulate the right answers. These are some general steps that companies can take:

  • Conducting ethical assessments: Before deploying an AI system, companies should assess its potential impacts and evaluate whether those impacts align with their ethical principles.
  • Ensuring transparency: Companies should strive to make their AI systems as transparent as possible so that users and other stakeholders can understand how decisions are being made. Algorithmic decisions must be explainable.
  • Providing human oversight: While AI systems can operate autonomously and effectively, they should never be left completely unsupervised. There should always be a human in the loop to ensure that decisions align with ethical principles.
  • Addressing bias: Companies should take steps to ensure that their AI systems are not perpetuating biases or inequalities. This might involve using more diverse training data or implementing bias-mitigating algorithms, though doing so can be costly. The work of the Iguales AI Bot on gender equality in the workplace is one example.
  • Engaging with stakeholders: Companies should engage with a range of stakeholders, including employees, customers, and communities, to understand their perspectives on the use of AI and to ensure that the use of AI aligns with their interests.
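The human-oversight step above can be sketched as a simple routing rule: decisions the model is confident about are applied automatically, and everything else is escalated to a human reviewer. This is a minimal, hypothetical policy; the threshold and decision labels are invented for illustration.

```python
# Hypothetical human-in-the-loop gate: auto-apply only high-confidence
# decisions, route the rest to a person. Threshold value is an assumption.

REVIEW_THRESHOLD = 0.90  # assumed policy, not a recommended universal value

def route_decision(decision, confidence, threshold=REVIEW_THRESHOLD):
    """Return ("auto", decision) or ("human_review", decision)."""
    if confidence >= threshold:
        return ("auto", decision)
    return ("human_review", decision)

# Invented usage example:
status, decision = route_decision("approve_loan", 0.62)
# status == "human_review": a person, not the model, makes the final call
```

The design choice worth noting is that the escalation path is explicit and auditable: every automated outcome can be traced back to a confidence score and a documented threshold.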

 

The Benefits of Responsible Use of AI

While there are certain risks associated with the use of AI, there are also numerous benefits. AI has the potential to transform industries, improve efficiency, and enhance our quality of life. However, these benefits can only be fully realized if AI is developed and used in a responsible manner. Companies that prioritize CSR in their use of AI are more likely to realize the benefits of AI while minimizing the risks.

This raises a most important question: what would a new AI-governed CSR look like? A full answer is beyond the scope of this article; however, there is a brilliantly formulated proposal by Professor Amy Webb of the New York University Stern School of Business, CEO of The Future Today Institute. Her initiative is known as the Global Augmented Intelligence Alliance.

Contact us: one of our experts can advise you!