Tech Leadership: Addressing AI and Cancel Culture
28 Nov
Fortunately, some leaders anticipate critical solutions: Sam Altman turns to cryptography to protect humans from AI, while Elon Musk changes business models and mobilizes the community to resist cancellation.
After OpenAI’s Board of Directors fired Sam Altman, accusing him of not being “consistently candid,” many of the employees who had helped create ChatGPT resigned or threatened to do so unless their CEO was reinstated. According to news reports, the situation originated in a tension within OpenAI between those who advocate rapid development of artificial intelligence (AI) and those who prefer to proceed more slowly and cautiously. OpenAI’s newly constituted board ultimately decided to reinstate the CEO, and many expect that Altman will help make wise choices about what to do next with such a powerful technology. As Sam Altman puts it, “[ChatGPT-4] is a tool largely under human control. It waits for someone to give an input, and what is worrying is who has control of these inputs.” It is hard to disagree: AI is changing the world at an unprecedented speed, which urges us all to pay close attention to its governance.
In parallel, that same week, Elon Musk made what some considered an “inappropriate” comment on the social network X (formerly Twitter), prompting a boycott by major advertisers, some of whom threatened to “cancel” the world’s richest man and even called for his removal from the companies he created. This brings us to the heart of the matter.
In both AI and social networks, business models that depend on exploiting user data are dangerous, especially when sophisticated algorithms that learn to influence behavior come into play. Perhaps it is no coincidence that Facebook never offered a “dislike” button: with all thumbs pointing upward, it promotes only enthusiastic reactions in which people are confirmed as correct, supported, and appreciated. After all, agreement is more seductive than disagreement. The algorithms behind such business models learn to feed that seduction, maximizing the time users spend on social networks and “monetizing” their attention through advertising.
This is why social networks often promote content that polarizes and alienates users, painting a one-dimensional world that aims to please each individual by validating their emotions and beliefs. They make us forget that other parties can also have good ideas, other clubs can have good players, and other religions can have their faith. They encourage unfair and dangerous generalizations, portraying a simplistic and convenient reality. Unfortunately, things may not stop there. With AI, the algorithms behind advertising-driven business models could become far more pernicious, intensifying stereotypes and steering society toward the systematic cancellation of divergence. Symptomatically, even with AI still in its infancy, the algorithms used in the advertising strategies of social networks have already considerably narrowed the diversity of ideas, contributing to rising intolerance and growing violence.
Fortunately, some leaders are capable of anticipating solutions. Sam Altman understands the potential of advanced cryptography to protect humanity against the risks of AI, as seen in the Worldcoin project, which uses a decentralized approach to prevent AI from manipulating us by pretending to be human. However, AI may eventually learn to perform the calculations needed to break any given code, making it crucial to keep such capabilities out of the wrong hands. The global community therefore needs to stay one step ahead in the race against those who would misuse AI. Elon Musk is helping to make this step a reality by changing the business model of the social network X: adopting a subscription system (against the opinion of many) that reduces dependence on rules potentially imposed by advertisers, and introducing the “Community Notes” feature, which lets users themselves contribute annotations and contextual information about posts, promoting truthfulness and collective understanding without censorship or cancellations.
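To make the cryptographic idea concrete, here is a minimal sketch of a “proof of personhood” credential. This is an assumption-laden illustration, not Worldcoin’s actual protocol (which relies on iris biometrics and zero-knowledge proofs): a hypothetical issuer verifies a human once and signs an opaque identifier, and any service can later check that signature without re-identifying the person. All names (`issue_credential`, `verify_credential`, the issuer key) are invented for this example.

```python
# Hedged sketch of a proof-of-personhood credential, using only the
# Python standard library. NOT Worldcoin's real protocol; an HMAC
# stands in for a proper digital signature scheme.
import hmac
import hashlib
import secrets

# Hypothetical issuer's private signing key (assumed one-time setup).
ISSUER_KEY = secrets.token_bytes(32)

def issue_credential(person_id: str) -> tuple:
    """Issuer signs an opaque ID after a one-time humanity check."""
    tag = hmac.new(ISSUER_KEY, person_id.encode(), hashlib.sha256).hexdigest()
    return person_id, tag

def verify_credential(person_id: str, tag: str) -> bool:
    """A service checks the tag against the issuer's key,
    learning only that the ID was certified as human."""
    expected = hmac.new(ISSUER_KEY, person_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

pid, tag = issue_credential("anon-7f3a")
assert verify_credential(pid, tag)            # genuine credential passes
assert not verify_credential(pid, "0" * 64)   # a forged tag fails
```

In a real decentralized design, the single issuer key would be replaced by public-key signatures verifiable by anyone, so no central party must be trusted at verification time; the sketch only shows the issue-once, verify-anywhere shape of the idea.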
As for halting the development of AI, a solution advocated by some, it seems counterproductive: not everyone would comply, creating a very dangerous vacuum that enemies of democracy and freedom could exploit. Moreover, stopping the progress of AI is practically impossible. In this context of inevitable development, a decentralized, cryptography-based approach is the most beneficial path, providing the openness needed to assemble larger training sets, essential for competitive leadership, while still ensuring the transparency indispensable for human validation of AI processes. This approach is crucial to ensuring the ethical use of AI and preventing the reinforcement of a cancellation culture in the digital age.
Dario de Oliveira Rodrigues