• Orange has been committed to implementing responsible and ethical AI for many years.
• The Group’s action in this area is embodied and supported by the Data and AI Ethics Council, made up of independent figures such as lawyers and academics.
• From customer relations roles to Computer Vision offerings, the achievements and progress enabled by the Council support the value-creation strategy. Ethical validation is a guarantee of quality.
After Bletchley Park in 2023 and Seoul in 2024, Paris is hosting the 3rd AI Action Summit on 10 and 11 February 2025. More than a thousand stakeholders from government, the private sector and civil society will gather to discuss the future of AI and its development in the public interest. Trustworthy AI and ethics-related topics in particular will be at the heart of these discussions.
In this article, Roxane Adle-Aiguier, PMO of the Data and AI Ethics Council, and Émilie Sirvent-Hien, Director of the ethical and responsible AI Research Programme at Orange, will speak about the efforts made within the Group to address the issues of ethics, responsibility and sustainability associated with the implementation of AI technologies.
Orange’s commitment goes beyond the theoretical framework; it is also about working towards the operationalisation of ethical AI.
In 2021, Orange created a Data and AI Ethics Council. Can you explain the role and assignments of that body?
Roxane Adle-Aiguier — The Council is made up of ten independent external expert advisors responsible for issuing advisory opinions to the Executive Committee on topics related to the governance of responsible AI or on concrete use cases submitted to it. It also monitors the implementation of the ethical guidelines and principles validated by the Executive Committee. It has defined an ethical framework for AI and Data technologies, embodied in Orange’s Data and AI Charter, in line with the Group’s corporate purpose, “to be a trusted player that paves the way to a responsible digital world welcoming each and every person”.
At Orange, reflections on AI ethics did not begin with the creation of the Council. Our commitment dates back more than a decade, to the time when research into AI technologies began, and takes shape both within and outside the company. In 2016, Mari-Noëlle Jégo-Laveissière (now CEO of Orange Europe) was a member of the European Commission’s High-Level Expert Group on Artificial Intelligence, whose work laid the foundations of trustworthy AI for Europe, foundations on which the AI Act as it exists today is built. In 2020, we also launched the ethical and responsible AI Research Programme, led by Émilie, to orchestrate research on ethical AI issues and advise the Group.
Émilie Sirvent-Hien — The Research Programme aims to dynamically advise the Group, enabling it to transform in step with technological developments and regulatory frameworks. It relies on a team of researchers with diverse skills and profiles, ranging from technology to human sciences. The Programme currently takes part in standardisation work and structures its activities around several key themes, such as equity and inclusion, explainability and transparency of algorithms, the environmental impact of AI systems and the culture of responsibility. Our scope of action is not simply theoretical or fundamental; it also involves working towards the operationalisation of ethical AI, through the study of use cases and the development and sharing of guidelines for the professions.
In 2022, Orange deployed its Data and Artificial Intelligence Ethical Charter. Since then, what have been the Council’s other major deliverables?
RAA — The signing, by Christel Heydemann and Michael Trabbia, and publication of the Charter drawn up by the Council represented a founding milestone in affirming our values, our positioning and our vision on ethical issues related to AI and Data. Since then, the Council has finalised multiple projects, many of which, as Émilie explained, address these issues from a concrete and operational angle: the creation of awareness training for our employees, on AI in general and ethical AI in particular; the production of recommendations for the customer relations professions; and the review and validation of the Group’s various guidelines. The Council also supported Orange Business in ensuring that the Computer Vision image recognition offering was in line with ethical principles, by studying seven particular use cases.
How do you manage the internal dynamic around ethical AI and assess how well teams have embraced and adapted to it?
ÉSH — We have a series of monitoring indicators, such as the number of AI-based systems, products and services; the number of employees who have received training; and the number of use cases studied or projects ethically analysed by business units, subsidiaries and divisions. Ethical officers in each country manage the systems by deploying the guidelines and methodologies we produce.
Beyond numbers or tools, the approach remains profoundly human, and it is not a one-way street. We see many employees asking questions and expressing expectations, and we are here to take them into account and answer them. This is all the more true as the explosion of generative AI has not only made AI tools accessible to more people, but has also prompted everyone to ask questions and become more aware of the need for responsible AI.
Is the approach also valued externally by customers or providers?
RAA — Of course, it reassures our customers of Orange’s adherence to ethical values. Coming back to the example of Orange Business’ aforementioned Computer Vision offering, taking into account the Council’s recommendations and validation is an advantage and a guarantee of quality. We also strive to promote these issues within our ecosystem of providers and subcontractors, by including clauses in our sourcing agreements on respect for our values and for AI ethics, just as is done for CSR.
ÉSH — The value associated with the approach is also measured in terms of reputation, over the long term. Not acting means running the risk of losing business, losing the trust of our employees and being less attractive to tomorrow’s talent, who are very sensitive to these issues.
Passed in 2024, the AI Act will apply in full from 2026. Is the Group ready?
RAA — We were able to anticipate the arrival of the AI Act through our participation in the High-Level Expert Group on Artificial Intelligence, our contribution to the White Paper on AI, “A European approach to excellence and trust”, and the implementation of processes and training to operationalise compliance with the principles of our ethical charter. Preparing our entities was also a goal associated with the creation of the Data and AI Ethics Council and, in fact, we already have officers, tools and risk management systems in place, which gives us a head start. However, we will need to pay particular attention to the specificities of the regulation and to how teams adapt, which we are starting to do with an initial e-learning course, and to work with business lines to accurately assess the potential impacts.
What do you expect from the Paris AI Action Summit?
ÉSH — This third edition must be more concrete and must generate action. We hope that the Summit will be a springboard for building collective momentum around key issues such as the consequences, positive or negative, of regulations; the environmental and societal impact of AI; and trust and security. From the Group’s point of view, it is also a time for discussions that will allow us to enrich our ecosystem of stakeholders, as well as our own ethical AI strategy.