The importance of embracing universal values and business models that prioritize humanity.
Since my involvement with an expert group on artificial intelligence (AI) began in 2018, helping to craft the guidelines that laid the groundwork for the EU’s AI Act proposal, I’ve been engaged in fiery debates about AI’s future. I’ve consistently supported one specific stance: competition is the lifeblood of progress, and the dance of innovation always outstrips the sluggish steps of regulation.
Allowing AI innovators to build algorithms with the freedom to learn and adapt without excessive constraint is essential for fostering innovation. However, this autonomy must be guided by universal values that prioritize humanity. Just as we teach our children fundamental values—including not to steal, not to lie, and to be kind—we should instill these same values into AI.
Properly harnessing AI
It’s important to acknowledge that AI is a tool, and like any such tool, its impact depends on how it’s used. Currently, much of AI’s potential is being channeled into business models that prioritize profit over societal benefit. Too often, we see AI being used for trivial purposes, such as personalizing advertisements, rather than addressing critical issues like healthcare innovation.
In 2015, I founded OKRA.ai with the mission of leveraging AI to accelerate the development and delivery of innovative treatments to patients. This illustrates how AI—with the right intention—can be implemented as part of a business model. Now, as chairwoman of the AI Innovation Board at Envision Pharma Group, I lead the integration of AI technology and data with human expertise to accelerate decision-making and drive faster, smarter patient outcomes.
When it comes to AI regulation, we need to embrace the messy, unpredictable nature of the technology and adapt our approach accordingly. Traditional rule-based systems do not suffice in the AI era. Rules should serve as guardrails to ensure that AI operates within ethical boundaries, promoting accountability.
We must also address the prevalent issue of data exploitation by certain business models, often referred to as “Trojan business models.” These models offer seemingly free services while surreptitiously harvesting user data for profit.
For instance, many social media companies have come under scrutiny for business models that use personal data to target users with advertisements. The result is a pervasive culture of exploitation and manipulation, where our data is often used against us for profit rather than for the purposes for which it was intended. This exploitation not only undermines user privacy but also erodes trust in AI technologies. We need to align intention, business model, and outcome.
To combat this, we need regulatory frameworks that prioritize data privacy and security while promoting transparency and accountability.
Coming together for a common goal: WEF meeting analysis
Having participated in sessions at the 54th Annual Meeting of the World Economic Forum (WEF) at the beginning of the year, I was struck by the common ground shared by leaders from diverse backgrounds when it comes to AI regulation. Amid the complexities of global governance, a consensus emerged around the importance of upholding universal values in shaping the future of AI.
One poignant example came from Ursula von der Leyen, president of the European Commission, and Li Qiang, premier of the People’s Republic of China, as highlighted in their special addresses at the meeting. Despite their differing geopolitical contexts, both leaders embraced the value of not lying, emphasizing their commitment to combating disinformation and misinformation in the digital sphere. This shared dedication underscores the universal value of truthfulness, a cornerstone of ethical AI regulation.
In a session titled “360° on AI Regulations,” the international panel echoed this sentiment, highlighting the borderless nature of AI’s impact. Because AI technologies transcend national boundaries, there is a clear need for collaborative, cross-border solutions and for global cooperation among stakeholders—governments, businesses, civil society, and academia—in shaping responsible AI governance frameworks. AI regulation necessitates a holistic approach that transcends geopolitical divides and embraces the principles of transparency, accountability, and inclusivity.
By encouraging businesses to embrace open data practices, we can ensure greater accountability, trust, and ethical standards in AI development and deployment. Open data initiatives empower researchers, entrepreneurs, developers, and policymakers with access to diverse datasets, enabling them to create more robust and unbiased AI systems.
A difference in values
At the same time, it’s important to acknowledge that not all values are universally aligned. While values such as “do not lie” find resonance across cultures, others, such as “do not steal,” may present divergent perspectives. China, for instance, champions collaboration and global exchange, advocating for a level playing field where all nations can participate in the AI revolution.
This calls for a shift toward a more open-minded approach and the implementation of measures that foster international cooperation in science and technology, ensuring a level playing field for all.
However, achieving genuine openness and equity in technology development requires addressing hurdles related to international trade and investment. For instance, while Chinese companies may establish headquarters in Europe, the reverse is often blocked by restrictions within China. This asymmetry not only impedes fair competition but also raises concerns about intellectual property rights and technology transfer.
Furthermore, the issue of intellectual property theft and unauthorized replication of products remains a significant obstacle to innovation and investment. When international companies manufacture products in China, they face the risk of having their designs and technologies replicated without consent, eroding their competitive edge and profitability. This dilemma may deter investors within the EU and US from engaging in business ventures with China.
To overcome these challenges, there’s a pressing need for the harmonization of regulatory frameworks and standards across borders. If each country enforces its own rules without considering the global ramifications, it creates a fragmented and uncertain landscape for businesses and investors alike. In the realm of AI regulation, adherence to European standards should be expected of international companies seeking to operate in the EU market, ensuring a level playing field and fostering trust among stakeholders.
In conclusion, as we navigate the transformative landscape of the AI revolution, one value remains constant: the importance of respecting our universal human values.
Every revolution throughout history has ultimately circled back to this fundamental principle: the recognition of our shared humanity and the imperative to uphold values such as honesty, integrity, and respect.
But how do we ensure that these values guide our actions in the realm of AI? How do we find common ground amid diverse perspectives and interests? The answer lies in dialogue, collaboration, and a willingness to listen and learn from one another.
About the Author
Loubna Bouarfa, PhD, is the chairwoman of the AI Innovation Board at Envision Pharma Group. She also founded OKRA.ai.