Big tech companies’ products and services are increasingly prominent in our everyday lives and have significant potential for societal impact. Digital services can contribute positively, but they can also seriously harm human rights and social structures. A failure to manage these impacts can damage a company’s social licence to operate, and the associated reputational, legal and regulatory risks may have material implications.
The human rights risks facing digital services are linked to complex issues such as the handling and commercialisation of personal data, privacy and freedom of expression, but also to potential impacts on the democratic system and the furthering of extremism and terrorism. Legislators and regulators are trying to establish regulation and oversight that keep pace with technological development, but tech companies have grown rapidly in a relatively short time, and their businesses are dynamic, diversified, transnational and technologically complex. In many cases, human rights risks can be exacerbated by the business models, corporate culture and incentive structures of the companies involved.
Indeed, many online platforms have business models centred on maximising clicks and engagement, serving users content they are interested in. This brings great benefits for transparency and connectivity, but it also carries notable risks. The dissemination of malicious, explicit or untrue content, or of extreme and one-sided opinions, can cause far-reaching damage. Beyond jeopardising individual human rights, especially those of vulnerable groups, hate speech and misinformation can have system-level consequences such as hyper-polarisation, violence, discrimination and the erosion of democracy.
This also poses material business risks. Impacts on users are drawing increasing attention and criticism from authorities, the media and the public alike, and whistleblower accounts and reports of questionable social risk management practices are damaging tech companies’ reputations and licence to operate. Eroding user trust and loyalty can affect brand value and future revenues. Fines and lawsuits are becoming more frequent, requiring companies to allocate financial and human resources to handle them. With more regulation likely, companies would benefit from constructive relations with policymakers and from proactively driving, and aligning themselves with, best practice.
How does the Council on Ethics work with big tech and human rights?
The Council has been working on this topic since 2019, including producing an investor expectations document together with the Danish Institute for Human Rights in 2020. In March 2023, the Council launched a collaborative engagement supported by more than 30 investors with over EUR 7 trillion in combined assets under management. This three-year initiative seeks to effect change in alignment with the UN Guiding Principles on Business and Human Rights.
The investor group will engage Alibaba, Alphabet, Amazon, Apple, Meta, Microsoft and Tencent. Depending on the company, the dialogues will focus on business models, corporate culture and structures, content-related issues, vulnerable groups (especially children), access to remedy and stakeholder engagement (including public policy engagement). The primary goal is for these companies to take concrete measures to address the operational and human rights risks and impacts pertaining to their products and business models, and to report more transparently on the related practices. The collaboration’s activities and results will be reported regularly.
A summary of activities and outcomes during 2023 is available here.