How to Build Responsible AI: Insights from Web Summit in Lisbon

The Web Summit in Lisbon, one of the biggest technology conferences worldwide, attracted more than 70,000 attendees from 153 countries, featured 806 speakers, and showcased 2,608 startups, according to the official press release.

The conference was a success despite the prior controversy caused by its founder, Paddy Cosgrave, whose comments on the Israel-Hamas war triggered a social media backlash and withdrawals by major tech companies such as Meta and Google, ultimately leading Cosgrave to step down as CEO.

“Our immediate task at hand is returning the focus to what we do best: facilitating discussions among everyone involved in technological progress,” said Cosgrave’s successor, Katherine Maher, at the opening of Web Summit. “In a present where technology is interwoven into every aspect of our lives, and in a future where it represents our greatest hope and our greatest disruptor, Web Summit’s role as a place for connection and conversation is more urgent now than ever.”

Unsurprisingly, one of the most prevalent discussions revolved around responsible AI and how to regulate the technology’s impact on society. Meredith Whittaker, president of the not-for-profit secure messaging app Signal, who also helped organize the 2018 Google walkouts, posed some essential questions:

Katherine Maher, CEO, Web Summit, centre, António Costa Silva, Minister of the Economy and Maritime Affairs, Government of Portugal, left, and Carlos Moedas, Mayor, City of Lisbon, on Centre Stage during the opening night of Web Summit 2023 at the Altice Arena in Lisbon, Portugal. Photo by Piaras Ó Mídheach/Web Summit via Sportsfile

Regulation Failure in the 90s Paved the Way for the Surveillance Business Model

“We have to ask ourselves: why now? Why did the term AI, first coined in 1956 and arguably more of a marketing term than a technological one, resurface today? Why does it have so much power over the shape of the industry, the shape of investment, and how we envision the future?”

According to Whittaker, the answers lie in the Clinton administration’s failure in the late 1990s to regulate the commercialization of network computation and the internet business model it gave rise to. This oversight allowed surveillance, above all surveillance advertising, to become the engine of that business model. Consequently, the firms that captured this value and established dominance grew into powerful entities with extensive resources in data, computation, and vast networks. They commanded massive markets from which they could continuously extract data and distribute products and services.

“And these are exactly the resources that, in 2012, were revealed to be definitive in this AI revival. So, if we look at AI that requires huge amounts of data, this ‘bigger is better paradigm’ that is driving the generative hype cycle now, we need to recognize it as a derivative of the concentrated power that accrued in the hands of a few companies via their monopolization of the surveillance business model. So my argument is that we need to go back and atone for the failures of the 90s if we want to address AI’s risks,” Whittaker explains.

How to build responsible AI: Andrew McAfee, Principal Research Scientist, MIT, on Centre Stage during day one of Web Summit 2023 at the Altice Arena in Lisbon, Portugal. Photo by Eóin Noonan/Web Summit via Sportsfile

Responsible AI: Making it easy to correct mistakes instead of impossible to make them?

A different approach to regulation comes from Andrew McAfee, principal research scientist at MIT and co-founder and co-director of the MIT Initiative on the Digital Economy at the MIT Sloan School of Management. In his talk, “How do we regulate AI,” he distinguished between two camps, team “permissionless innovation” and team “upstream governance,” emphasizing that he belongs to the first.

He said that while fundamental disagreements exist between the upstream governance and permissionless innovation camps, both sides recognize the high stakes involved, including the impact on living standards and the need for regulation against potential harm. They also agree on the benefits and risks that follow from democratizing access to powerful technologies such as AI.

McAfee continued with an example of how he thinks responsible AI should be approached: “Early last decade, we realized that there were these men on subways in America who were using their smartphone cameras inappropriately to take photos under the skirts of women as they boarded the subway on their way to work. This misuse of technology clearly violated basic rights and freedoms. At the time, existing peeping Tom laws aimed at protecting privacy did not explicitly prohibit this behavior. Once this legal loophole became obvious, legislators in Massachusetts, my home state, acted with remarkable speed. Within just two days of a court ruling that this act was not illegal under current laws, they amended the legislation to make it illegal.

“This response exemplifies what those of us who advocate for ‘permissionless innovation’ consider appropriate. It would have been counterproductive to ask smartphone manufacturers like Apple to seek permission before adding cameras to their phones or to mandate users to certify their intent not to misuse the camera upon purchase. To put it in the words of Wikipedia founder Jimmy Wales: ‘It’s better to make it easy to correct mistakes instead of impossible to make them.’”

Building a responsible AI ecosystem

Ricardo Baptista Leite, CEO of HealthAI, a global agency for Responsible AI in the health sector, discussed the challenges and approaches towards building an ecosystem that puts guardrails in place without stifling innovation.

“There’s an interesting book called Power and Progress that looks at the introduction of new technologies by humanity over the last 1,000 years,” explains Leite. “And almost consistently, we’ve had people saying, ‘This technology will make our life better.’ And if you look at the data, it didn’t make life better for the majority but made life better for a few. The times that we were successful as humanity in using technology in a way that benefited all were when we embedded inclusivity, fairness, accountability, all these very important principles, from the start, within the inception process and the development of the technologies, and then their deployment and application in the real world. And that’s what I think we need to learn from the lessons of the past to do better in the future, knowing we’re going to make mistakes as we move forward.”

“Self-regulation isn’t enough to protect everyone, but too much legislation can hinder innovation. Finding the balance is key.”

Leite also points out that while some corporations genuinely want to act responsibly, they often struggle to connect with the people who understand the conversations that need to happen. At the same time, there are negative forces within the corporate world. Leite calls their actions “digital colonization”: exploiting data, especially in healthcare, without any oversight. This widens the digital divide and increases economic and social inequalities. Leite believes that the solution lies in governments and corporations working together. He argues that self-regulation isn’t enough to protect everyone, but too much legislation can hinder innovation. Therefore, finding a balance is key.

Are We Moral Enough for AI?

“No, we are not moral enough to handle some of these tools without proper legislation and regulation,” said Brittany Kaiser, former Cambridge Analytica whistleblower and director of the Own Your Data Foundation, during a panel discussion on the last day of Web Summit.

“I observed that these are merely tools, and their moral use depends on those who employ them. Certainly, Cambridge Analytica had numerous clients who utilized data positively. However, when it got into the wrong hands, data was weaponized to target people, incite violence and racial hatred, disrupt elections, and provoke crises rather than prevent them. Until we have established fundamental principles regarding data ownership, fiduciary duties of custodians, tracking and traceability, consent frameworks, transparency, and the monetization of personal data, we are in a bad place. Without these foundations, examining how data feeds into algorithms, especially those underpinning artificial intelligence, is risky. We are in a dangerous position because there is still no adequate legislation and regulation on the basic building blocks that could one day take us to responsible AI.”

“Our challenges are looking at the problems we didn’t solve with internet search algorithms first and how AI feeds into that.”

Lexi Mills, digital marketing expert and CEO of the Shift6 Global PR agency, adds: “Many of the problems we see in AI, especially regarding biases, exist in all technology. And I’m a search specialist. So the internet algorithms are a reflection of one aspect of humanity. But it’s not a true reflection. In some ways, it’s an opinion shaped by algorithms designed to serve whatever the user desires. But there’s a difference between getting what you want and what you need. And then, when we look at the training sets, people have coded the training. So there’s an issue there. And then there’s the data AI is learning from. But the data itself, how it is fed in, is determined by traditional Google optimization. If you have a strong presence on Google, that presence will be mirrored within an AI model. So our challenges are looking at the problems we didn’t solve with internet search algorithms first and how AI feeds into that.

“But we can also spin this both ways. Power can be used for good, or it can be used for bad. If you went back five years and searched for a pilot job and clicked on images, you wouldn’t see many women in there. But that’s changed now. And some of that has changed deliberately, with people trying to get more women into becoming pilots. Therefore, there are more images, and some people have even actively hacked those results, so that’s changing. If we have an objective and know how the technology works, then the person using it can make choices. The problem is that we also have economies and profitability playing into this. And that will always be a challenge for responsible AI, because where the money goes, we see that mirrored in certain behaviors within algorithms.”

