Some tough questions for AI

Artificial intelligence promises to solve everything from everyday dilemmas, such as how to brush our teeth correctly, to slightly deeper concerns, such as justifying our individual existence to the world.

Despite the frantic pace set by startups and technology companies in general, there is value in pausing to examine the fundamental paradox at the heart of this revolution of revolutions we are living through: a growing tension between efficiency and equity.

Problem or solution?

As in the stories of Jorge Luis Borges, where labyrinths lead to other labyrinths, each “solution” offered by artificial intelligence opens new ethical crossroads. The technology that promises to democratize knowledge simultaneously threatens to deepen the social gaps we already suffer from.

The dilemma is not trivial. In a productivity-oriented global economy, the argument for efficiency sounds compelling: advances in AI can improve health systems, make urban mobility safer, and raise business productivity.

Yet these achievements hide an uncomfortable question: who benefits, and who is left behind?

The bulk of AI's benefits is being concentrated in the hands of large corporations and in countries with high levels of technological development, while developing nations and the most vulnerable sectors of society remain mere passive consumers of the technology.

The clarity of ethics

Is it ethically justifiable to optimize systems to achieve more speed, precision and profitability, if this means deepening social and economic inequalities?

I am reminded of an afternoon when, sitting in a restaurant in the center of Coyoacán, I watched a shoe shiner hold his smartphone in one hand while polishing shoes with the other. The scene, apparently contradictory, captures the ambivalent relationship we have with technology: we can all access it, but not all of us can take advantage of it in the same way.

Artificial intelligence strikes me as one of those funhouse mirrors that distort the image to the point of hilarity: it hands us back a reality deformed by algorithmic biases, except that instead of the laughter of the fair, we hear applause and unquestioning credibility.

What happens when these distortions affect decisions about who gets a job, credit, or even parole? Efficiency then becomes a sophisticated form of discrimination.

New layers of complexity are added to this problem in countries like ours, where inequality is a historical scar. While Silicon Valley celebrates each advance in data processing, many regions of Mexico struggle to get a stable internet connection. The digital divide threatens to become the most inequitable and unforgiving abyss we have ever known.

Do we embrace AI?

Anyone still wondering whether or not we should adopt artificial intelligence is asking the wrong question; that train has already left the station. The real task is to steer it toward a more equitable destination.

As a society, we must decide whether we want to be like that Rulfo character who sacrificed his soul for modernity, or whether we can find a path where technological efficiency does not devour the hunger for social justice.

The solutions are not simple, but neither should they be seen as impossible. We lack regulatory frameworks that balance innovation with social safeguards, public policies that guarantee the benefits of AI are not concentrated in the hands of a minority, and serious reflection on what kind of future we seek to build.

“Technique gives us things in series; art and craft, unique pieces,” said Octavio Paz. Perhaps the key is to find a middle ground where the efficiency of artificial intelligence does not crush human uniqueness, where the pursuit of productivity does not sacrifice the genuine aspiration to equality.

The difficult questions

So as not to break the promise made in this article's title, here are just a few of the questions with which we could corner any AI right now:

  • How can you ensure that AI development is not just focused on short-term gains, but on long-term benefit to society?
  • Should humans have the final say in crucial AI-assisted decisions, even if this reduces the system’s efficiency?
  • How do you determine what types of decisions this should apply to?
  • Given that training and deploying AI requires significant resources (power, data, servers), how do you ensure that access to these resources does not further deepen the inequality between wealthy and disadvantaged actors?
  • How could AI be decentralized so that more actors – and not just large companies – have the capacity to develop, control and benefit from it?
  • Does standardizing AI to optimize processes compromise or dilute cultural identities and diversity?
  • How could AI systems respect diversity without sacrificing efficiency?
  • Is it ethical for AI to optimize the design of digital platforms to manipulate user behaviors to maximize usage time and attention?
  • What role should AI play in education and job training to help those who might be displaced by it retrain in new skills?
  • Is it the responsibility of those who develop AI to ensure that mass unemployment is not generated?
  • How can companies make the goals of their AI developments transparent so that users understand when a system is optimized for commercial benefits versus social or collective efficiency benefits?
  • What should be the minimum digital rights of citizens in an era of ubiquitous AI, especially regarding privacy, anonymity and data protection? Is it possible to protect these rights without compromising the efficiency of AI systems?
  • To what extent should AI autonomy be allowed in areas such as health and justice?
  • Is it more important to maintain a high level of human supervision to ensure fairness, even if this implies a loss of efficiency?
  • What practices should be adopted to prevent AI data collection from reinforcing existing biases against marginalized communities?
  • Can this need be balanced with the training efficiency of the models?
  • Should there be citizen participation in key decisions about how AI is developed and used?
  • How would this affect the efficiency of AI development?
  • Should AI developers take direct responsibility for biases that their technology may replicate, even when efficiency favors them?
  • Should AI developments that significantly increase efficiency (in areas such as health or transportation) be privately controlled, or should they be treated as public goods to ensure equitable access?
  • What regulatory frameworks can balance the need for rapid innovation in AI (efficiency) with the protection of vulnerable populations (equality)?
  • Is it more important to ensure that AI decision-making processes are transparent and fair, or to prioritize the efficiency and accuracy of those decisions?
  • How can the efficiency of AI be harnessed to reduce, rather than exacerbate, the digital divide between different socioeconomic or geographic groups?
  • Should the efficiency benefits generated by AI be redistributed (for example, through a universal basic income) to mitigate rising inequality?
  • And how far should that go?

Contact:

* Eduardo Navarrete is a specialist in Futures Studies, journalist, photographer and Head of Content in UX Marketing.

Linkedin: https://www.linkedin.com/in/eduardo-navarrete

Mail: (email protected)

Instagram: @elnavarrete


